Test Report: Docker_Linux_crio_arm64 17485

8dc642b39e51c59087e6696ac1afe8c1c527ee77:2023-10-24:31589

Failed tests (11/307)

TestAddons/parallel/Ingress (484.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-228070 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-228070 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-228070 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d55f9bf6-38ea-4587-adb0-f64601bb7bf1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:249: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:249: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-228070 -n addons-228070
addons_test.go:249: TestAddons/parallel/Ingress: showing logs for failed pods as of 2023-10-24 19:35:24.588154084 +0000 UTC m=+715.631191380
addons_test.go:249: (dbg) Run:  kubectl --context addons-228070 describe po nginx -n default
addons_test.go:249: (dbg) kubectl --context addons-228070 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-228070/192.168.49.2
Start Time:       Tue, 24 Oct 2023 19:27:24 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
  IP:  10.244.0.26
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m8lx7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-m8lx7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  8m                     default-scheduler  Successfully assigned default/nginx to addons-228070
  Warning  Failed     7m29s                  kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     6m14s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    4m27s (x4 over 8m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     3m43s (x4 over 7m29s)  kubelet            Error: ErrImagePull
  Warning  Failed     3m43s (x2 over 5m14s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     3m29s (x6 over 7m29s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m51s (x9 over 7m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
addons_test.go:249: (dbg) Run:  kubectl --context addons-228070 logs nginx -n default
addons_test.go:249: (dbg) Non-zero exit: kubectl --context addons-228070 logs nginx -n default: exit status 1 (105.349805ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:249: kubectl --context addons-228070 logs nginx -n default: exit status 1
addons_test.go:250: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
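
The failure above is not a cluster fault: every pull of docker.io/nginx:alpine was rejected by Docker Hub with "toomanyrequests", meaning the runner's anonymous pull quota was exhausted. As a diagnostic sketch outside the test run (assumes curl and jq are available on the host), Docker's documented rate-limit endpoint reports the quota state for the runner's IP:

	# Fetch an anonymous token, then read the ratelimit-limit /
	# ratelimit-remaining headers for the current source IP.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

Authenticated pulls (docker login on the node, or an imagePullSecrets entry in the pod spec) raise the limit, as the error message itself suggests.
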
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-228070
helpers_test.go:235: (dbg) docker inspect addons-228070:
-- stdout --
	[
	    {
	        "Id": "b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40",
	        "Created": "2023-10-24T19:24:21.412947887Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1118596,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:24:21.72306644Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40/hosts",
	        "LogPath": "/var/lib/docker/containers/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40-json.log",
	        "Name": "/addons-228070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-228070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-228070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1342b74d74b452e0cae9517227eac31573fef4763faee6dfdca49587620218da-init/diff:/var/lib/docker/overlay2/ab7e622cf253e7484ae8d7af3c5bb3ba83f211c878ee7a8c069db30bbba78b6c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1342b74d74b452e0cae9517227eac31573fef4763faee6dfdca49587620218da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1342b74d74b452e0cae9517227eac31573fef4763faee6dfdca49587620218da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1342b74d74b452e0cae9517227eac31573fef4763faee6dfdca49587620218da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-228070",
	                "Source": "/var/lib/docker/volumes/addons-228070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-228070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-228070",
	                "name.minikube.sigs.k8s.io": "addons-228070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0b8279e960ee9cf210571e91a7a50c0a03039aa250d378ad0b781b6177f7a86",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34208"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e0b8279e960e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-228070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b8190648a249",
	                        "addons-228070"
	                    ],
	                    "NetworkID": "269732e24e22caf879a9ab6a4e06c7cd3d21ef6dc936ec12a30edb19d0435768",
	                    "EndpointID": "b5785c518fdf9d4bf4c4ed803a4f8e63d3864a49823611f4aafdf9feed8c130d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
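
Note that every exposed port (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1 with an ephemeral host port. A single mapping can be read back with the same Go template minikube itself uses later in this log (example only; addons-228070 is this run's container name):

	# Host port backing the container's SSH port (22/tcp); prints 34210 in this run.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-228070
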
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-228070 -n addons-228070
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-228070 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-228070 logs -n 25: (1.614683888s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-654862   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | -p download-only-654862                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-654862   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | -p download-only-654862                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| delete  | -p download-only-654862                                                                     | download-only-654862   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| delete  | -p download-only-654862                                                                     | download-only-654862   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| start   | --download-only -p                                                                          | download-docker-959559 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | download-docker-959559                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-959559                                                                   | download-docker-959559 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-775727   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | binary-mirror-775727                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38809                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-775727                                                                     | binary-mirror-775727   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| addons  | enable dashboard -p                                                                         | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | addons-228070                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | addons-228070                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-228070 --wait=true                                                                | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:26 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | -p addons-228070                                                                            |                        |         |         |                     |                     |
	| ip      | addons-228070 ip                                                                            | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	| addons  | addons-228070 addons disable                                                                | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-228070 ssh cat                                                                       | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | /opt/local-path-provisioner/pvc-320b3b4e-2781-4009-93c4-e0f32e3a5a23_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-228070 addons disable                                                                | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | -p addons-228070                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | addons-228070                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | addons-228070                                                                               |                        |         |         |                     |                     |
	| addons  | addons-228070 addons                                                                        | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:23:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:23:58.232556 1118138 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:23:58.232769 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:23:58.232795 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:23:58.232817 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:23:58.233102 1118138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 19:23:58.233565 1118138 out.go:303] Setting JSON to false
	I1024 19:23:58.234690 1118138 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32788,"bootTime":1698142651,"procs":384,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:23:58.234791 1118138 start.go:138] virtualization:  
	I1024 19:23:58.238010 1118138 out.go:177] * [addons-228070] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 19:23:58.240982 1118138 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:23:58.243032 1118138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:23:58.241121 1118138 notify.go:220] Checking for updates...
	I1024 19:23:58.245605 1118138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:23:58.247506 1118138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:23:58.249749 1118138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 19:23:58.251649 1118138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:23:58.254089 1118138 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:23:58.280909 1118138 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:23:58.281026 1118138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:23:58.357069 1118138 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-24 19:23:58.347594636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:23:58.357170 1118138 docker.go:295] overlay module found
	I1024 19:23:58.360770 1118138 out.go:177] * Using the docker driver based on user configuration
	I1024 19:23:58.363019 1118138 start.go:298] selected driver: docker
	I1024 19:23:58.363036 1118138 start.go:902] validating driver "docker" against <nil>
	I1024 19:23:58.363049 1118138 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:23:58.363666 1118138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:23:58.431801 1118138 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-24 19:23:58.422643161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:23:58.432016 1118138 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:23:58.432236 1118138 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:23:58.434443 1118138 out.go:177] * Using Docker driver with root privileges
	I1024 19:23:58.436442 1118138 cni.go:84] Creating CNI manager for ""
	I1024 19:23:58.436466 1118138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:23:58.436477 1118138 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:23:58.436492 1118138 start_flags.go:323] config:
	{Name:addons-228070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-228070 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:23:58.438985 1118138 out.go:177] * Starting control plane node addons-228070 in cluster addons-228070
	I1024 19:23:58.441075 1118138 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:23:58.443203 1118138 out.go:177] * Pulling base image ...
	I1024 19:23:58.445359 1118138 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:23:58.445401 1118138 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1024 19:23:58.445412 1118138 cache.go:57] Caching tarball of preloaded images
	I1024 19:23:58.445460 1118138 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:23:58.445495 1118138 preload.go:174] Found /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1024 19:23:58.445505 1118138 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:23:58.445934 1118138 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/config.json ...
	I1024 19:23:58.445967 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/config.json: {Name:mk6577e7c79f8446f59999ab7a22676511cb2efb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:23:58.462487 1118138 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:23:58.462614 1118138 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1024 19:23:58.462633 1118138 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1024 19:23:58.462638 1118138 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1024 19:23:58.462645 1118138 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1024 19:23:58.462650 1118138 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from local cache
	I1024 19:24:13.995511 1118138 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from cached tarball
	I1024 19:24:13.995549 1118138 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:24:13.995618 1118138 start.go:365] acquiring machines lock for addons-228070: {Name:mke1bcca4f678271bb257b8b6dc020a3e38db683 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:24:13.995753 1118138 start.go:369] acquired machines lock for "addons-228070" in 104.656µs
	I1024 19:24:13.995791 1118138 start.go:93] Provisioning new machine with config: &{Name:addons-228070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-228070 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:24:13.995878 1118138 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:24:13.998404 1118138 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1024 19:24:13.998665 1118138 start.go:159] libmachine.API.Create for "addons-228070" (driver="docker")
	I1024 19:24:13.998695 1118138 client.go:168] LocalClient.Create starting
	I1024 19:24:13.998799 1118138 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem
	I1024 19:24:14.270046 1118138 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem
	I1024 19:24:14.742820 1118138 cli_runner.go:164] Run: docker network inspect addons-228070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 19:24:14.763418 1118138 cli_runner.go:211] docker network inspect addons-228070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 19:24:14.763511 1118138 network_create.go:281] running [docker network inspect addons-228070] to gather additional debugging logs...
	I1024 19:24:14.763532 1118138 cli_runner.go:164] Run: docker network inspect addons-228070
	W1024 19:24:14.784008 1118138 cli_runner.go:211] docker network inspect addons-228070 returned with exit code 1
	I1024 19:24:14.784048 1118138 network_create.go:284] error running [docker network inspect addons-228070]: docker network inspect addons-228070: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-228070 not found
	I1024 19:24:14.784061 1118138 network_create.go:286] output of [docker network inspect addons-228070]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-228070 not found
	
	** /stderr **
	I1024 19:24:14.784164 1118138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:24:14.801967 1118138 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002993c60}
	I1024 19:24:14.802009 1118138 network_create.go:124] attempt to create docker network addons-228070 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1024 19:24:14.802074 1118138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-228070 addons-228070
	I1024 19:24:14.871195 1118138 network_create.go:108] docker network addons-228070 192.168.49.0/24 created
	I1024 19:24:14.871226 1118138 kic.go:118] calculated static IP "192.168.49.2" for the "addons-228070" container
	I1024 19:24:14.871316 1118138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:24:14.888319 1118138 cli_runner.go:164] Run: docker volume create addons-228070 --label name.minikube.sigs.k8s.io=addons-228070 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:24:14.906757 1118138 oci.go:103] Successfully created a docker volume addons-228070
	I1024 19:24:14.906850 1118138 cli_runner.go:164] Run: docker run --rm --name addons-228070-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-228070 --entrypoint /usr/bin/test -v addons-228070:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:24:17.030915 1118138 cli_runner.go:217] Completed: docker run --rm --name addons-228070-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-228070 --entrypoint /usr/bin/test -v addons-228070:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (2.1240206s)
	I1024 19:24:17.030945 1118138 oci.go:107] Successfully prepared a docker volume addons-228070
	I1024 19:24:17.030977 1118138 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:24:17.031005 1118138 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 19:24:17.031077 1118138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-228070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 19:24:21.327431 1118138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-228070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.296316167s)
	I1024 19:24:21.327469 1118138 kic.go:200] duration metric: took 4.296462 seconds to extract preloaded images to volume
	W1024 19:24:21.327609 1118138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 19:24:21.327730 1118138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 19:24:21.396851 1118138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-228070 --name addons-228070 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-228070 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-228070 --network addons-228070 --ip 192.168.49.2 --volume addons-228070:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:24:21.731384 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Running}}
	I1024 19:24:21.757923 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:21.781989 1118138 cli_runner.go:164] Run: docker exec addons-228070 stat /var/lib/dpkg/alternatives/iptables
	I1024 19:24:21.871431 1118138 oci.go:144] the created container "addons-228070" has a running status.
	I1024 19:24:21.871458 1118138 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa...
	I1024 19:24:22.668817 1118138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 19:24:22.693239 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:22.713484 1118138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 19:24:22.713503 1118138 kic_runner.go:114] Args: [docker exec --privileged addons-228070 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 19:24:22.804813 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:22.828070 1118138 machine.go:88] provisioning docker machine ...
	I1024 19:24:22.828100 1118138 ubuntu.go:169] provisioning hostname "addons-228070"
	I1024 19:24:22.828168 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:22.849460 1118138 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:22.849958 1118138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1024 19:24:22.849981 1118138 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-228070 && echo "addons-228070" | sudo tee /etc/hostname
	I1024 19:24:23.018662 1118138 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-228070
	
	I1024 19:24:23.018749 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:23.046331 1118138 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:23.046754 1118138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1024 19:24:23.046780 1118138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-228070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-228070/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-228070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:24:23.187139 1118138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
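For reference, the guard script above is idempotent: it only touches /etc/hosts when no line already ends in the hostname, and it prefers rewriting an existing 127.0.1.1 entry over appending a new one. After it runs, the container is expected to carry a line like the following (reconstructed from the script, not captured from the node):

	127.0.1.1 addons-228070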
	I1024 19:24:23.187176 1118138 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 19:24:23.187216 1118138 ubuntu.go:177] setting up certificates
	I1024 19:24:23.187225 1118138 provision.go:83] configureAuth start
	I1024 19:24:23.187294 1118138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-228070
	I1024 19:24:23.207616 1118138 provision.go:138] copyHostCerts
	I1024 19:24:23.207696 1118138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 19:24:23.207820 1118138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 19:24:23.207882 1118138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 19:24:23.207928 1118138 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.addons-228070 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-228070]
	I1024 19:24:23.721200 1118138 provision.go:172] copyRemoteCerts
	I1024 19:24:23.721272 1118138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:24:23.721313 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:23.741770 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:23.840877 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:24:23.870058 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1024 19:24:23.898446 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:24:23.926742 1118138 provision.go:86] duration metric: configureAuth took 739.501162ms
	I1024 19:24:23.926774 1118138 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:24:23.926959 1118138 config.go:182] Loaded profile config "addons-228070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:24:23.927076 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:23.946138 1118138 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:23.946585 1118138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1024 19:24:23.946607 1118138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:24:24.200040 1118138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:24:24.200062 1118138 machine.go:91] provisioned docker machine in 1.3719725s
	I1024 19:24:24.200073 1118138 client.go:171] LocalClient.Create took 10.201368675s
	I1024 19:24:24.200085 1118138 start.go:167] duration metric: libmachine.API.Create for "addons-228070" took 10.20142095s
	I1024 19:24:24.200092 1118138 start.go:300] post-start starting for "addons-228070" (driver="docker")
	I1024 19:24:24.200102 1118138 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:24:24.200171 1118138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:24:24.200230 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:24.218515 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:24.316721 1118138 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:24:24.320822 1118138 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:24:24.320928 1118138 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:24:24.320948 1118138 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:24:24.320961 1118138 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:24:24.320972 1118138 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 19:24:24.321041 1118138 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 19:24:24.321067 1118138 start.go:303] post-start completed in 120.968651ms
	I1024 19:24:24.321384 1118138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-228070
	I1024 19:24:24.342006 1118138 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/config.json ...
	I1024 19:24:24.342294 1118138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:24:24.342350 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:24.360217 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:24.459651 1118138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:24:24.465145 1118138 start.go:128] duration metric: createHost completed in 10.4692534s
	I1024 19:24:24.465170 1118138 start.go:83] releasing machines lock for "addons-228070", held for 10.469402692s
	I1024 19:24:24.465261 1118138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-228070
	I1024 19:24:24.482884 1118138 ssh_runner.go:195] Run: cat /version.json
	I1024 19:24:24.482933 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:24.482941 1118138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:24:24.482998 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:24.502413 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:24.503796 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:24.732665 1118138 ssh_runner.go:195] Run: systemctl --version
	I1024 19:24:24.738069 1118138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:24:24.886545 1118138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:24:24.891863 1118138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:24:24.915046 1118138 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:24:24.915123 1118138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:24:24.956798 1118138 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
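Rather than deleting conflicting CNI configs, minikube sidelines them: the find/mv pairs above rename any loopback, bridge, or podman config with a .mk_disabled suffix, so the CNI minikube installs itself (kindnet, selected later in this run) is the only one cri-o will load, and the originals can be restored by stripping the suffix. A minimal sketch of the same step for one of the files named in the output above:

	sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled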
	I1024 19:24:24.956823 1118138 start.go:472] detecting cgroup driver to use...
	I1024 19:24:24.956855 1118138 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:24:24.956903 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:24:24.974347 1118138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:24:24.987765 1118138 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:24:24.987873 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:24:25.005526 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:24:25.023278 1118138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:24:25.121237 1118138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:24:25.231286 1118138 docker.go:214] disabling docker service ...
	I1024 19:24:25.231355 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:24:25.252148 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:24:25.265782 1118138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:24:25.375050 1118138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:24:25.483022 1118138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:24:25.496398 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:24:25.516259 1118138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:24:25.516348 1118138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:25.528041 1118138 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:24:25.528156 1118138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:25.543078 1118138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:25.555925 1118138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
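Taken together, the sed edits above pin the pause image and align cri-o's cgroup handling with the kubelet. A sketch of the resulting drop-in (reconstructed from the commands, with section placement assumed from cri-o's documented layout; the real /etc/crio/crio.conf.d/02-crio.conf carries additional keys):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"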
	I1024 19:24:25.568416 1118138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:24:25.579758 1118138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:24:25.589957 1118138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:24:25.599891 1118138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:24:25.696124 1118138 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:24:25.822704 1118138 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:24:25.822816 1118138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:24:25.828130 1118138 start.go:540] Will wait 60s for crictl version
	I1024 19:24:25.828213 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:24:25.832525 1118138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:24:25.873870 1118138 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:24:25.873993 1118138 ssh_runner.go:195] Run: crio --version
	I1024 19:24:25.916762 1118138 ssh_runner.go:195] Run: crio --version
	I1024 19:24:25.966012 1118138 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 19:24:25.968232 1118138 cli_runner.go:164] Run: docker network inspect addons-228070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:24:25.985261 1118138 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1024 19:24:25.989967 1118138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
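Both this update and the control-plane.minikube.internal one later in the run use the same atomic pattern: grep -v drops any stale entry, echo appends the fresh mapping, the combined output lands in a PID-keyed temp file, and sudo cp writes it back in one step. cp is used rather than mv, plausibly because /etc/hosts is bind-mounted into the container and must keep its inode.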
	I1024 19:24:26.003581 1118138 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:24:26.003659 1118138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:24:26.077851 1118138 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:24:26.077873 1118138 crio.go:415] Images already preloaded, skipping extraction
	I1024 19:24:26.077932 1118138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:24:26.121302 1118138 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:24:26.121321 1118138 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:24:26.121394 1118138 ssh_runner.go:195] Run: crio config
	I1024 19:24:26.179430 1118138 cni.go:84] Creating CNI manager for ""
	I1024 19:24:26.179460 1118138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:24:26.179502 1118138 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:24:26.179521 1118138 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-228070 NodeName:addons-228070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:24:26.179690 1118138 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-228070"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:24:26.179795 1118138 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-228070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-228070 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:24:26.179866 1118138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:24:26.190441 1118138 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:24:26.190515 1118138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:24:26.200604 1118138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1024 19:24:26.221125 1118138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:24:26.241833 1118138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1024 19:24:26.262552 1118138 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:24:26.266859 1118138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:24:26.281366 1118138 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070 for IP: 192.168.49.2
	I1024 19:24:26.281402 1118138 certs.go:190] acquiring lock for shared ca certs: {Name:mka7b9c27527bac3ad97e94531dcdc2bc2059d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.281523 1118138 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key
	I1024 19:24:26.719818 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt ...
	I1024 19:24:26.719859 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt: {Name:mk176e869d131afd9ab971311c554f848d81b3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.720114 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key ...
	I1024 19:24:26.720128 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key: {Name:mkc4569e00ec9c92d961853afdbc997153c81aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.720241 1118138 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key
	I1024 19:24:26.910532 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt ...
	I1024 19:24:26.910563 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt: {Name:mk2dd6a0990851e5951a630cf5c87b30ece8682c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.911299 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key ...
	I1024 19:24:26.911314 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key: {Name:mk11f100c5c66245f4cde45e3c4db06a91481f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.911452 1118138 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.key
	I1024 19:24:26.911469 1118138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt with IP's: []
	I1024 19:24:27.344410 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt ...
	I1024 19:24:27.344444 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: {Name:mk399e7421911ced5fee71a70a59e55b4f23142d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:27.344673 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.key ...
	I1024 19:24:27.344687 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.key: {Name:mkc3dffac2f70d7596c74d18e8c0cf4da87d8abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:27.344781 1118138 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key.dd3b5fb2
	I1024 19:24:27.344801 1118138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:24:27.876735 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt.dd3b5fb2 ...
	I1024 19:24:27.876767 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt.dd3b5fb2: {Name:mk27de5254dc11d4cd709dbdcd82e677694dcf42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:27.876964 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key.dd3b5fb2 ...
	I1024 19:24:27.876978 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key.dd3b5fb2: {Name:mk04e2fa075ab478d73c4d86c4ae72e310d34944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:27.877078 1118138 certs.go:337] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt
	I1024 19:24:27.877155 1118138 certs.go:341] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key
	I1024 19:24:27.877207 1118138 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.key
	I1024 19:24:27.877230 1118138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.crt with IP's: []
	I1024 19:24:28.121992 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.crt ...
	I1024 19:24:28.122028 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.crt: {Name:mk6cdf942fe9e14df7951e6b4e10399fd12acdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:28.122782 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.key ...
	I1024 19:24:28.122800 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.key: {Name:mk4564b5419d3b8595885ab9ca5c11a2f75bfb3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:28.123413 1118138 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:24:28.123469 1118138 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:24:28.123499 1118138 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:24:28.123527 1118138 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem (1675 bytes)
	I1024 19:24:28.124189 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:24:28.151752 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:24:28.179813 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:24:28.207704 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:24:28.235828 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:24:28.265128 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:24:28.291893 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:24:28.319552 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:24:28.347109 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:24:28.374334 1118138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:24:28.394552 1118138 ssh_runner.go:195] Run: openssl version
	I1024 19:24:28.401159 1118138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:24:28.412536 1118138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:28.416867 1118138 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:28.416999 1118138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:28.425195 1118138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
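The b5213941.0 link name follows OpenSSL's subject-hash convention: the hash printed by `openssl x509 -hash -noout` plus a ".0" suffix, which is how tools locate a CA in /etc/ssl/certs. A minimal sketch of the same two steps, assuming the hash resolves to b5213941 as in this run:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"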
	I1024 19:24:28.436469 1118138 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:24:28.440626 1118138 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:24:28.440692 1118138 kubeadm.go:404] StartCluster: {Name:addons-228070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-228070 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:24:28.440772 1118138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:24:28.440837 1118138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:24:28.485378 1118138 cri.go:89] found id: ""
	I1024 19:24:28.485449 1118138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:24:28.495571 1118138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:24:28.505651 1118138 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1024 19:24:28.505782 1118138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:24:28.515720 1118138 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:24:28.515763 1118138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1024 19:24:28.568155 1118138 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:24:28.568379 1118138 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:24:28.613697 1118138 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:24:28.613812 1118138 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-aws
	I1024 19:24:28.613870 1118138 kubeadm.go:322] OS: Linux
	I1024 19:24:28.613948 1118138 kubeadm.go:322] CGROUPS_CPU: enabled
	I1024 19:24:28.614025 1118138 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1024 19:24:28.614103 1118138 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1024 19:24:28.614180 1118138 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1024 19:24:28.614255 1118138 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1024 19:24:28.614336 1118138 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1024 19:24:28.614399 1118138 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1024 19:24:28.614477 1118138 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1024 19:24:28.614541 1118138 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1024 19:24:28.694062 1118138 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:24:28.694189 1118138 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:24:28.694325 1118138 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1024 19:24:28.946227 1118138 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:24:28.950063 1118138 out.go:204]   - Generating certificates and keys ...
	I1024 19:24:28.950290 1118138 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:24:28.950452 1118138 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:24:29.274067 1118138 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:24:29.695234 1118138 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:24:29.854908 1118138 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:24:30.296335 1118138 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:24:30.842365 1118138 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:24:30.842766 1118138 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-228070 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:24:31.176661 1118138 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:24:31.177080 1118138 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-228070 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:24:31.851112 1118138 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:24:32.489345 1118138 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:24:32.620971 1118138 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:24:32.621298 1118138 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:24:33.029425 1118138 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:24:33.699791 1118138 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:24:33.963532 1118138 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:24:34.513839 1118138 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:24:34.514448 1118138 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:24:34.518977 1118138 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:24:34.524388 1118138 out.go:204]   - Booting up control plane ...
	I1024 19:24:34.524540 1118138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:24:34.524617 1118138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:24:34.524683 1118138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:24:34.534498 1118138 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:24:34.535552 1118138 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:24:34.535827 1118138 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:24:34.636844 1118138 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:24:41.139122 1118138 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502383 seconds
	I1024 19:24:41.139248 1118138 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:24:41.156124 1118138 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:24:41.680905 1118138 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:24:41.681093 1118138 kubeadm.go:322] [mark-control-plane] Marking the node addons-228070 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:24:42.193548 1118138 kubeadm.go:322] [bootstrap-token] Using token: zhjfdy.ymp8jw4z2hzsevhw
	I1024 19:24:42.195524 1118138 out.go:204]   - Configuring RBAC rules ...
	I1024 19:24:42.195645 1118138 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:24:42.203319 1118138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:24:42.212271 1118138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:24:42.216415 1118138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:24:42.220711 1118138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:24:42.225184 1118138 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:24:42.240674 1118138 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:24:42.486001 1118138 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:24:42.620694 1118138 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:24:42.621866 1118138 kubeadm.go:322] 
	I1024 19:24:42.621942 1118138 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:24:42.621949 1118138 kubeadm.go:322] 
	I1024 19:24:42.622022 1118138 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:24:42.622032 1118138 kubeadm.go:322] 
	I1024 19:24:42.622057 1118138 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:24:42.622112 1118138 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:24:42.622164 1118138 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:24:42.622175 1118138 kubeadm.go:322] 
	I1024 19:24:42.622230 1118138 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:24:42.622237 1118138 kubeadm.go:322] 
	I1024 19:24:42.622282 1118138 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:24:42.622291 1118138 kubeadm.go:322] 
	I1024 19:24:42.622340 1118138 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:24:42.622413 1118138 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:24:42.622481 1118138 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:24:42.622490 1118138 kubeadm.go:322] 
	I1024 19:24:42.622569 1118138 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:24:42.622662 1118138 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:24:42.622671 1118138 kubeadm.go:322] 
	I1024 19:24:42.622749 1118138 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zhjfdy.ymp8jw4z2hzsevhw \
	I1024 19:24:42.622851 1118138 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 \
	I1024 19:24:42.622876 1118138 kubeadm.go:322] 	--control-plane 
	I1024 19:24:42.622884 1118138 kubeadm.go:322] 
	I1024 19:24:42.622963 1118138 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:24:42.622972 1118138 kubeadm.go:322] 
	I1024 19:24:42.623053 1118138 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zhjfdy.ymp8jw4z2hzsevhw \
	I1024 19:24:42.623152 1118138 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 
	I1024 19:24:42.627275 1118138 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1024 19:24:42.627434 1118138 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:24:42.627466 1118138 cni.go:84] Creating CNI manager for ""
	I1024 19:24:42.627479 1118138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:24:42.629976 1118138 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:24:42.632063 1118138 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:24:42.638196 1118138 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:24:42.638217 1118138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:24:42.676499 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:24:43.538699 1118138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:24:43.538845 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:43.538920 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=addons-228070 minikube.k8s.io/updated_at=2023_10_24T19_24_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:43.559281 1118138 ops.go:34] apiserver oom_adj: -16
	I1024 19:24:43.684322 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:43.808875 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:44.420675 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:44.920535 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:45.421205 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:45.921505 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:46.421034 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:46.921121 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:47.420585 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:47.920597 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:48.420585 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:48.921507 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:49.421162 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:49.921103 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:50.421526 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:50.921344 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:51.421021 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:51.921249 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:52.420507 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:52.921134 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:53.420972 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:53.921408 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:54.420578 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:54.921450 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:55.420573 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:55.514709 1118138 kubeadm.go:1081] duration metric: took 11.975908408s to wait for elevateKubeSystemPrivileges.
	I1024 19:24:55.514743 1118138 kubeadm.go:406] StartCluster complete in 27.074072739s
	I1024 19:24:55.514760 1118138 settings.go:142] acquiring lock: {Name:mkaa82b52e1ee562b451304e36332812fcccf981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:55.514888 1118138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:24:55.515268 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/kubeconfig: {Name:mkcb958baf0d06a87d3e11266d914b0c86b46ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:55.515453 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:24:55.515728 1118138 config.go:182] Loaded profile config "addons-228070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:24:55.515841 1118138 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1024 19:24:55.515930 1118138 addons.go:69] Setting volumesnapshots=true in profile "addons-228070"
	I1024 19:24:55.515945 1118138 addons.go:231] Setting addon volumesnapshots=true in "addons-228070"
	I1024 19:24:55.515978 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.516428 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.516879 1118138 addons.go:69] Setting cloud-spanner=true in profile "addons-228070"
	I1024 19:24:55.516895 1118138 addons.go:231] Setting addon cloud-spanner=true in "addons-228070"
	I1024 19:24:55.516950 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.517317 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.517839 1118138 addons.go:69] Setting metrics-server=true in profile "addons-228070"
	I1024 19:24:55.517871 1118138 addons.go:231] Setting addon metrics-server=true in "addons-228070"
	I1024 19:24:55.517930 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.518375 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.518807 1118138 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-228070"
	I1024 19:24:55.518848 1118138 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-228070"
	I1024 19:24:55.518878 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.519239 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.527631 1118138 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-228070"
	I1024 19:24:55.529901 1118138 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-228070"
	I1024 19:24:55.529966 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.530390 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.532274 1118138 addons.go:69] Setting registry=true in profile "addons-228070"
	I1024 19:24:55.555531 1118138 addons.go:231] Setting addon registry=true in "addons-228070"
	I1024 19:24:55.555637 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.556101 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.528601 1118138 addons.go:69] Setting default-storageclass=true in profile "addons-228070"
	I1024 19:24:55.559683 1118138 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-228070"
	I1024 19:24:55.560033 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.528614 1118138 addons.go:69] Setting gcp-auth=true in profile "addons-228070"
	I1024 19:24:55.587643 1118138 mustload.go:65] Loading cluster: addons-228070
	I1024 19:24:55.587930 1118138 config.go:182] Loaded profile config "addons-228070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:24:55.588293 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.532412 1118138 addons.go:69] Setting storage-provisioner=true in profile "addons-228070"
	I1024 19:24:55.597285 1118138 addons.go:231] Setting addon storage-provisioner=true in "addons-228070"
	I1024 19:24:55.597367 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.528620 1118138 addons.go:69] Setting ingress=true in profile "addons-228070"
	I1024 19:24:55.602093 1118138 addons.go:231] Setting addon ingress=true in "addons-228070"
	I1024 19:24:55.602178 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.602656 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.528628 1118138 addons.go:69] Setting inspektor-gadget=true in profile "addons-228070"
	I1024 19:24:55.633107 1118138 addons.go:231] Setting addon inspektor-gadget=true in "addons-228070"
	I1024 19:24:55.633190 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.633664 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.528624 1118138 addons.go:69] Setting ingress-dns=true in profile "addons-228070"
	I1024 19:24:55.635263 1118138 addons.go:231] Setting addon ingress-dns=true in "addons-228070"
	I1024 19:24:55.635345 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.635801 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.532420 1118138 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-228070"
	I1024 19:24:55.655539 1118138 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-228070"
	I1024 19:24:55.655895 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.678920 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.717331 1118138 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1024 19:24:55.742214 1118138 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:24:55.742235 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:24:55.742297 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.750354 1118138 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1024 19:24:55.742166 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:24:55.758510 1118138 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-228070" context rescaled to 1 replicas
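(The rescale above trims the coredns deployment to a single replica, which is enough for a one-node cluster and frees resources for the addons being installed. The equivalent by hand, a sketch:

    kubectl -n kube-system scale deployment coredns --replicas=1
)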
	I1024 19:24:55.760789 1118138 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1024 19:24:55.767484 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1024 19:24:55.767492 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1024 19:24:55.768762 1118138 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1024 19:24:55.768765 1118138 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1024 19:24:55.768779 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1024 19:24:55.768827 1118138 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:24:55.772174 1118138 out.go:177] * Verifying Kubernetes components...
	I1024 19:24:55.770287 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.773632 1118138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:24:55.774217 1118138 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:24:55.774226 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1024 19:24:55.788625 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1024 19:24:55.793353 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1024 19:24:55.795552 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1024 19:24:55.800312 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1024 19:24:55.800561 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1024 19:24:55.809899 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1024 19:24:55.809966 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.801443 1118138 addons.go:231] Setting addon default-storageclass=true in "addons-228070"
	I1024 19:24:55.801595 1118138 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:24:55.801605 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1024 19:24:55.808293 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.819949 1118138 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1024 19:24:55.812652 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1024 19:24:55.812658 1118138 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1024 19:24:55.812684 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.812693 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1024 19:24:55.812752 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.846835 1118138 out.go:177]   - Using image docker.io/registry:2.8.3
	I1024 19:24:55.848943 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1024 19:24:55.845726 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.845788 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.872333 1118138 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1024 19:24:55.872352 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1024 19:24:55.872414 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.890840 1118138 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:24:55.851076 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1024 19:24:55.890142 1118138 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-228070"
	I1024 19:24:55.893193 1118138 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:24:55.893200 1118138 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1024 19:24:55.893208 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1024 19:24:55.894651 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.894854 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.895298 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.904638 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:24:55.904724 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.906410 1118138 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:24:55.908602 1118138 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:24:55.910539 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1024 19:24:55.910562 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1024 19:24:55.910629 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.926387 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
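(The repeated docker container inspect -f calls above resolve the host port Docker mapped to the node container's SSH port (22/tcp); each sshutil line then dials 127.0.0.1 on that port as user docker with the profile's id_rsa key. The same lookup by hand, a sketch — the key path varies with MINIKUBE_HOME; ~/.minikube is the default layout:

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-228070)
    ssh -i ~/.minikube/machines/addons-228070/id_rsa -p "$PORT" docker@127.0.0.1
)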
	I1024 19:24:55.930149 1118138 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:24:55.930168 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1024 19:24:55.930228 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.992248 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.081004 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.087333 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.149997 1118138 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:24:56.150017 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:24:56.150083 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:56.150405 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.151384 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.178080 1118138 out.go:177]   - Using image docker.io/busybox:stable
	I1024 19:24:56.180004 1118138 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1024 19:24:56.182144 1118138 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:24:56.182165 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1024 19:24:56.182232 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:56.194622 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.201976 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.215766 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.216334 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.245011 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.256743 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.377119 1118138 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:24:56.377187 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1024 19:24:56.533711 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1024 19:24:56.593659 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:24:56.610472 1118138 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:24:56.610496 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:24:56.615550 1118138 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1024 19:24:56.615572 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1024 19:24:56.620214 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:24:56.630230 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1024 19:24:56.630253 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1024 19:24:56.660552 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:24:56.708016 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1024 19:24:56.708040 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1024 19:24:56.728169 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:24:56.730466 1118138 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1024 19:24:56.730486 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1024 19:24:56.788208 1118138 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:24:56.788232 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:24:56.799812 1118138 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1024 19:24:56.799835 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1024 19:24:56.803178 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:24:56.820634 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1024 19:24:56.820658 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1024 19:24:56.835214 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1024 19:24:56.835238 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1024 19:24:56.840861 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:24:56.917240 1118138 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:24:56.917265 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1024 19:24:56.980017 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1024 19:24:56.980039 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1024 19:24:56.983461 1118138 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1024 19:24:56.983482 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1024 19:24:56.992223 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:24:57.007277 1118138 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.246351794s)
	I1024 19:24:57.007307 1118138 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1024 19:24:57.007352 1118138 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.205892241s)
	I1024 19:24:57.008162 1118138 node_ready.go:35] waiting up to 6m0s for node "addons-228070" to be "Ready" ...
	I1024 19:24:57.012202 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1024 19:24:57.012225 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1024 19:24:57.103718 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:24:57.156353 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1024 19:24:57.156378 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1024 19:24:57.165535 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1024 19:24:57.165557 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1024 19:24:57.234396 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1024 19:24:57.234422 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1024 19:24:57.374340 1118138 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:24:57.374365 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1024 19:24:57.393545 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1024 19:24:57.393570 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1024 19:24:57.415728 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1024 19:24:57.415753 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1024 19:24:57.505624 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:24:57.510751 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1024 19:24:57.510776 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1024 19:24:57.520254 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1024 19:24:57.520277 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1024 19:24:57.588864 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1024 19:24:57.588889 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1024 19:24:57.624493 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:24:57.624519 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1024 19:24:57.650519 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1024 19:24:57.650546 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1024 19:24:57.780782 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:24:57.864333 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1024 19:24:57.864357 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1024 19:24:57.976757 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:24:57.976782 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1024 19:24:58.230749 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:24:59.164004 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:00.071163 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.477434466s)
	I1024 19:25:00.071286 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.53755175s)
	I1024 19:25:01.478199 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:01.622742 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.00249119s)
	I1024 19:25:01.622777 1118138 addons.go:467] Verifying addon ingress=true in "addons-228070"
	I1024 19:25:01.624955 1118138 out.go:177] * Verifying ingress addon...
	I1024 19:25:01.622965 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.962387204s)
	I1024 19:25:01.623003 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.894809338s)
	I1024 19:25:01.623026 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.819827234s)
	I1024 19:25:01.623063 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.782179714s)
	I1024 19:25:01.623118 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.630869131s)
	I1024 19:25:01.623154 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.5194104s)
	I1024 19:25:01.623245 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.117595957s)
	I1024 19:25:01.623312 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.842494726s)
	I1024 19:25:01.625526 1118138 addons.go:467] Verifying addon metrics-server=true in "addons-228070"
	I1024 19:25:01.625544 1118138 addons.go:467] Verifying addon registry=true in "addons-228070"
	W1024 19:25:01.625585 1118138 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1024 19:25:01.628811 1118138 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1024 19:25:01.630533 1118138 out.go:177] * Verifying registry addon...
	I1024 19:25:01.633531 1118138 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1024 19:25:01.630699 1118138 retry.go:31] will retry after 141.318584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
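(Both error dumps above are the same race: the batch applies csi-hostpath-snapshotclass.yaml — a VolumeSnapshotClass custom resource — in the same kubectl invocation that creates the CRD defining that kind, and the API server has not finished registering the new kind when the CR is submitted, hence "no matches for kind" and the "ensure CRDs are installed first" hint. minikube just retries (and, below, re-applies with --force) until the CRD is established. A sketch of the ordering that avoids the retry, assuming the same manifest paths:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
)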
	I1024 19:25:01.650278 1118138 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1024 19:25:01.650352 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:01.660265 1118138 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1024 19:25:01.660285 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:01.662407 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1024 19:25:01.666883 1118138 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
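(This warning is an optimistic-concurrency conflict, not a missing object: the local-path StorageClass changed between the client's read and its update, so the stale write was rejected on the resourceVersion check. Re-reading and retrying works, as does a patch, which carries no resourceVersion. A sketch using the standard default-class annotation:

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
)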
	I1024 19:25:01.671515 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:01.775942 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:25:01.991669 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.760867475s)
	I1024 19:25:01.991752 1118138 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-228070"
	I1024 19:25:02.000854 1118138 out.go:177] * Verifying csi-hostpath-driver addon...
	I1024 19:25:02.003831 1118138 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1024 19:25:02.024658 1118138 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1024 19:25:02.024725 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:02.043302 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:02.167189 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:02.176485 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:02.550956 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:02.643006 1118138 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1024 19:25:02.643115 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:25:02.683209 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:25:02.688567 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:02.702592 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:02.908581 1118138 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1024 19:25:02.934794 1118138 addons.go:231] Setting addon gcp-auth=true in "addons-228070"
	I1024 19:25:02.934867 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:25:02.935424 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:25:02.974249 1118138 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1024 19:25:02.974307 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:25:03.009972 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:25:03.060334 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:03.173140 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:03.196074 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:03.308478 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.532415403s)
	I1024 19:25:03.310825 1118138 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:25:03.312888 1118138 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1024 19:25:03.315012 1118138 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1024 19:25:03.315035 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1024 19:25:03.375614 1118138 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1024 19:25:03.375676 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1024 19:25:03.457143 1118138 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:25:03.457211 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1024 19:25:03.506502 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:25:03.563037 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:03.695660 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:03.703154 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:03.957133 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:04.048989 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:04.167399 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:04.176688 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:04.561282 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:04.616861 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.110273178s)
	I1024 19:25:04.618297 1118138 addons.go:467] Verifying addon gcp-auth=true in "addons-228070"
	I1024 19:25:04.620171 1118138 out.go:177] * Verifying gcp-auth addon...
	I1024 19:25:04.623255 1118138 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
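(The gcp-auth addon being verified here runs a mutating admission webhook — the kube-webhook-certgen image above provisions its TLS certificates — that patches pods created afterwards to mount the credentials file scp'd to /var/lib/minikube/google_application_credentials.json and to set the GOOGLE_*/CLOUDSDK project env vars. To inspect the pieces once the pod is Ready, a sketch:

    kubectl get mutatingwebhookconfigurations
    kubectl -n gcp-auth get pods
)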
	I1024 19:25:04.629090 1118138 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1024 19:25:04.629110 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:04.633620 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:04.666961 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:04.687487 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:05.047483 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:05.137843 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:05.167054 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:05.175999 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:05.548408 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:05.638092 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:05.668131 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:05.678331 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:06.048829 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:06.137935 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:06.167611 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:06.176038 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:06.456859 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:06.551342 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:06.640930 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:06.667669 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:06.678564 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:07.049559 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:07.137966 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:07.167029 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:07.177952 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:07.557431 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:07.639498 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:07.668043 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:07.676623 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:08.049413 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:08.138381 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:08.176369 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:08.183560 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:08.457390 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:08.547734 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:08.637430 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:08.666857 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:08.675957 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:09.049270 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:09.137807 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:09.167323 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:09.175484 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:09.548606 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:09.637876 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:09.666888 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:09.675702 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:10.048859 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:10.137561 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:10.167618 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:10.175777 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:10.549186 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:10.637654 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:10.666832 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:10.675668 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:10.957629 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:11.048791 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:11.138150 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:11.166669 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:11.175818 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:11.547819 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:11.644608 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:11.673393 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:11.683627 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:12.048336 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:12.137203 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:12.167502 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:12.175742 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:12.548152 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:12.637914 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:12.666472 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:12.675550 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:13.048242 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:13.138046 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:13.166742 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:13.175688 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:13.456416 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:13.548046 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:13.637519 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:13.668226 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:13.676050 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:14.048664 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:14.137433 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:14.167121 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:14.175959 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:14.548197 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:14.637655 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:14.666757 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:14.675798 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:15.048537 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:15.137664 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:15.167087 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:15.176238 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:15.456812 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:15.547770 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:15.637222 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:15.667137 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:15.676119 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:16.048662 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:16.137507 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:16.166742 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:16.175739 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:16.548796 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:16.638212 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:16.666900 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:16.676037 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:17.047957 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:17.137397 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:17.167384 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:17.176110 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:17.457316 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:17.548431 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:17.637849 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:17.666812 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:17.675733 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:18.048406 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:18.137978 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:18.167197 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:18.176099 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:18.548216 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:18.637794 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:18.667302 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:18.676070 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:19.048195 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:19.137114 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:19.167078 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:19.176126 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:19.547421 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:19.637355 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:19.667096 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:19.676057 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:19.956353 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:20.048248 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:20.137057 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:20.167515 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:20.175834 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:20.547996 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:20.637631 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:20.666673 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:20.675208 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:21.048317 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:21.138086 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:21.167102 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:21.175970 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:21.551721 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:21.638219 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:21.668421 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:21.676505 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:21.956862 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:22.048176 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:22.138293 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:22.167065 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:22.175907 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:22.548247 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:22.637134 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:22.666761 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:22.678698 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:23.048314 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:23.137703 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:23.167379 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:23.176312 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:23.548227 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:23.637029 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:23.667023 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:23.675818 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:23.957234 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:24.048711 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:24.137654 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:24.167287 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:24.176222 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:24.547890 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:24.637938 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:24.666441 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:24.676541 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:25.048950 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:25.137691 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:25.167039 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:25.176077 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:25.547659 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:25.638189 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:25.667322 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:25.676417 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:26.049053 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:26.137885 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:26.166751 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:26.175632 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:26.457148 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:26.548791 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:26.638179 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:26.667010 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:26.675887 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:27.048468 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:27.137348 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:27.167014 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:27.175973 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:27.548313 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:27.637367 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:27.667109 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:27.676910 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:28.048207 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:28.137649 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:28.167498 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:28.175338 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:28.457365 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:28.547596 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:28.637231 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:28.666764 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:28.675810 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:29.034141 1118138 node_ready.go:49] node "addons-228070" has status "Ready":"True"
	I1024 19:25:29.034167 1118138 node_ready.go:38] duration metric: took 32.025981539s waiting for node "addons-228070" to be "Ready" ...
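The node_ready.go lines above come from a simple poll of the node's Ready condition until it flips from "False" to "True". A minimal client-go sketch of that kind of poll follows; the helper name waitNodeReady and the 500ms interval are illustrative assumptions, not minikube's actual implementation.

    // nodeready_sketch.go - illustrative only, not minikube's code.
    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named node's Ready condition, mirroring the
    // node_ready.go "Ready":"False" -> "Ready":"True" lines above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Printf("node %q has status Ready:True\n", name)
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // poll interval is an assumption
            }
        }
    }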
	I1024 19:25:29.034178 1118138 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1024 19:25:29.070286 1118138 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fhbrz" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:29.077210 1118138 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1024 19:25:29.077240 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:29.139897 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:29.178330 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:29.257994 1118138 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1024 19:25:29.258057 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:29.550173 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:29.675053 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:29.692386 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:29.693026 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:30.051913 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:30.141201 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:30.168607 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:30.177431 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:30.553394 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:30.641823 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:30.668842 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:30.681287 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:31.051298 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:31.144238 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:31.144751 1118138 pod_ready.go:102] pod "coredns-5dd5756b68-fhbrz" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:31.167179 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:31.177693 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:31.552443 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:31.653610 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:31.665594 1118138 pod_ready.go:92] pod "coredns-5dd5756b68-fhbrz" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.665664 1118138 pod_ready.go:81] duration metric: took 2.595301442s waiting for pod "coredns-5dd5756b68-fhbrz" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.665701 1118138 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.686763 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:31.687850 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:31.691046 1118138 pod_ready.go:92] pod "etcd-addons-228070" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.691112 1118138 pod_ready.go:81] duration metric: took 25.377502ms waiting for pod "etcd-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.691141 1118138 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.710436 1118138 pod_ready.go:92] pod "kube-apiserver-addons-228070" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.710550 1118138 pod_ready.go:81] duration metric: took 19.389578ms waiting for pod "kube-apiserver-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.710591 1118138 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.724653 1118138 pod_ready.go:92] pod "kube-controller-manager-addons-228070" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.724720 1118138 pod_ready.go:81] duration metric: took 14.072134ms waiting for pod "kube-controller-manager-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.724762 1118138 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qtmf6" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.760154 1118138 pod_ready.go:92] pod "kube-proxy-qtmf6" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.760223 1118138 pod_ready.go:81] duration metric: took 35.436292ms waiting for pod "kube-proxy-qtmf6" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.760249 1118138 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:32.049047 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:32.137722 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:32.158108 1118138 pod_ready.go:92] pod "kube-scheduler-addons-228070" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:32.158179 1118138 pod_ready.go:81] duration metric: took 397.910787ms waiting for pod "kube-scheduler-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:32.158206 1118138 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:32.167553 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:32.178380 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:32.558065 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:32.638024 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:32.667016 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:32.678177 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:33.052325 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:33.138035 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:33.167686 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:33.177289 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:33.550154 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:33.637974 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:33.668029 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:33.677154 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:34.050500 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:34.140019 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:34.168110 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:34.181352 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:34.465446 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:34.557029 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:34.639300 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:34.668777 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:34.683599 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:35.050712 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:35.138512 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:35.167761 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:35.176708 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:35.550798 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:35.639033 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:35.669657 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:35.678366 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:36.049964 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:36.137268 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:36.177056 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:36.179940 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:36.551160 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:36.639255 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:36.680308 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:36.681236 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:36.965514 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:37.051569 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:37.138061 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:37.167444 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:37.176119 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:37.551063 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:37.644213 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:37.669262 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:37.677693 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:38.050170 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:38.138875 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:38.168318 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:38.180176 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:38.552100 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:38.656573 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:38.667359 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:38.677447 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:39.050594 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:39.139541 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:39.167474 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:39.178265 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:39.465326 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:39.550816 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:39.638190 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:39.670307 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:39.682867 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:40.050650 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:40.138713 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:40.170891 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:40.179657 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:40.571117 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:40.638518 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:40.669795 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:40.680016 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:41.050792 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:41.137492 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:41.167497 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:41.176312 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:41.466037 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:41.550912 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:41.644696 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:41.671257 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:41.680933 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:42.051645 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:42.141342 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:42.168900 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:42.178871 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:42.552892 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:42.640007 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:42.668826 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:42.678797 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:43.049671 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:43.138027 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:43.167060 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:43.176342 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:43.549542 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:43.646671 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:43.672335 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:43.676907 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:43.965845 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:44.055937 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:44.141611 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:44.167507 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:44.176720 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:44.555481 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:44.639439 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:44.667736 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:44.679273 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:45.051963 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:45.138363 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:45.167583 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:45.177022 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:45.549472 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:45.639951 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:45.667136 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:45.676438 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:45.966843 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:46.049446 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:46.138589 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:46.167423 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:46.177640 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:46.552334 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:46.641522 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:46.668152 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:46.677325 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:47.050545 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:47.137210 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:47.169085 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:47.176822 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:47.552652 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:47.638357 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:47.667978 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:47.678636 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:48.051116 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:48.138601 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:48.167897 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:48.184725 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:48.494839 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:48.549927 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:48.641395 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:48.666890 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:48.676382 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:49.049446 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:49.138091 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:49.167535 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:49.176279 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:49.465341 1118138 pod_ready.go:92] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:49.465367 1118138 pod_ready.go:81] duration metric: took 17.307141735s waiting for pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:49.465386 1118138 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:49.551150 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:49.637987 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:49.667749 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:49.678102 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:50.050654 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:50.217730 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:50.218190 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:50.218933 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:50.554528 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:50.641361 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:50.682392 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:50.687631 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:51.049199 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:51.137704 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:51.168463 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:51.177677 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:51.492043 1118138 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:51.550906 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:51.638183 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:51.674434 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:51.691064 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:52.050435 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:52.137400 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:52.168438 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:52.177327 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:52.558848 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:52.638579 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:52.668773 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:52.702148 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:53.052836 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:53.138995 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:53.171292 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:53.178108 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:53.503293 1118138 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:53.549138 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:53.638175 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:53.691317 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:53.692499 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:54.049357 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:54.140636 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:54.167499 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:54.177841 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:54.550143 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:54.637647 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:54.668031 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:54.687666 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:55.049335 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:55.138269 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:55.167596 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:55.176123 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:55.549971 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:55.637888 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:55.668877 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:55.677313 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:55.993500 1118138 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:56.050785 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:56.137574 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:56.167804 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:56.177973 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:56.554138 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:56.638676 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:56.668091 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:56.678702 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:57.057672 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:57.137823 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:57.168562 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:57.177433 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:57.549378 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:57.637987 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:57.667910 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:57.677985 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:58.052189 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:58.137981 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:58.167941 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:58.186954 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:58.492532 1118138 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:58.550297 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:58.640004 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:58.668206 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:58.677013 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:58.998562 1118138 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:58.998588 1118138 pod_ready.go:81] duration metric: took 9.533171772s waiting for pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:58.998610 1118138 pod_ready.go:38] duration metric: took 29.964420406s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
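The repeated kapi.go:96 "waiting for pod ... Pending" lines above reflect label-selector polls that keep listing pods until every match leaves Pending. A hedged client-go sketch of such a loop follows; the helper name waitPodsBySelector and the interval are assumptions, not minikube's actual code.

    // podwait_sketch.go - illustrative only.
    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsBySelector blocks until at least one pod matches selector and
    // all matching pods are Running, in the spirit of the kapi.go lines above.
    func waitPodsBySelector(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                allRunning := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        allRunning = false // still Pending, as logged above
                        break
                    }
                }
                if allRunning {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // interval is an assumption
            }
        }
    }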
	I1024 19:25:58.998624 1118138 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:25:58.998648 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:25:58.998717 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:25:59.052962 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:59.074966 1118138 cri.go:89] found id: "af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:25:59.075025 1118138 cri.go:89] found id: ""
	I1024 19:25:59.075053 1118138 logs.go:284] 1 containers: [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37]
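Container-ID discovery here shells out to crictl with --quiet, which prints one container ID per line (hence the single-element list above). A rough Go equivalent, run directly on the node rather than through minikube's ssh_runner, might look like the sketch below; the function name is an assumption.

    // crictl_sketch.go - illustrative only; assumes crictl is on PATH
    // and the caller can sudo without a prompt.
    package sketch

    import (
        "os/exec"
        "strings"
    )

    // listContainerIDs reproduces the "sudo crictl ps -a --quiet --name=..."
    // calls in the log: -a includes stopped containers, --quiet prints IDs only.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }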
	I1024 19:25:59.075141 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.080306 1118138 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:25:59.080419 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:25:59.138086 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:59.147258 1118138 cri.go:89] found id: "ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:25:59.147327 1118138 cri.go:89] found id: ""
	I1024 19:25:59.147349 1118138 logs.go:284] 1 containers: [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df]
	I1024 19:25:59.147440 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.152769 1118138 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:25:59.152888 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:25:59.170363 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:59.181241 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:59.216958 1118138 cri.go:89] found id: "ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:25:59.217025 1118138 cri.go:89] found id: ""
	I1024 19:25:59.217046 1118138 logs.go:284] 1 containers: [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb]
	I1024 19:25:59.217134 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.221525 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:25:59.221652 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:25:59.279165 1118138 cri.go:89] found id: "30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:25:59.279236 1118138 cri.go:89] found id: ""
	I1024 19:25:59.279258 1118138 logs.go:284] 1 containers: [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b]
	I1024 19:25:59.279340 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.283834 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:25:59.283965 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:25:59.330306 1118138 cri.go:89] found id: "a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:25:59.330381 1118138 cri.go:89] found id: ""
	I1024 19:25:59.330403 1118138 logs.go:284] 1 containers: [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5]
	I1024 19:25:59.330489 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.335139 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:25:59.335223 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:25:59.382901 1118138 cri.go:89] found id: "837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:25:59.382924 1118138 cri.go:89] found id: ""
	I1024 19:25:59.382932 1118138 logs.go:284] 1 containers: [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b]
	I1024 19:25:59.382984 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.387577 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:25:59.387722 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:25:59.430049 1118138 cri.go:89] found id: "05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:25:59.430071 1118138 cri.go:89] found id: ""
	I1024 19:25:59.430079 1118138 logs.go:284] 1 containers: [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703]
	I1024 19:25:59.430135 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.434826 1118138 logs.go:123] Gathering logs for kubelet ...
	I1024 19:25:59.434852 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 19:25:59.491345 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.504813    1357 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.491576 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.504846    1357 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.491757 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.524197    1357 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.491955 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.503120 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.503320 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.503501 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.503702 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:25:59.529540 1118138 logs.go:123] Gathering logs for dmesg ...
	I1024 19:25:59.529577 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:25:59.550862 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:59.559693 1118138 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:25:59.559724 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:25:59.638211 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:59.667558 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:59.677238 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:59.765370 1118138 logs.go:123] Gathering logs for kube-apiserver [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37] ...
	I1024 19:25:59.765403 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:25:59.979611 1118138 logs.go:123] Gathering logs for etcd [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df] ...
	I1024 19:25:59.979646 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:00.066268 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:00.107636 1118138 logs.go:123] Gathering logs for kube-scheduler [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b] ...
	I1024 19:26:00.107717 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:00.168106 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:00.190902 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:00.202272 1118138 logs.go:123] Gathering logs for kindnet [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703] ...
	I1024 19:26:00.202304 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:00.206741 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:00.276553 1118138 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:26:00.276586 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:26:00.381895 1118138 logs.go:123] Gathering logs for container status ...
	I1024 19:26:00.381930 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:26:00.468984 1118138 logs.go:123] Gathering logs for coredns [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb] ...
	I1024 19:26:00.469016 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:00.549873 1118138 logs.go:123] Gathering logs for kube-proxy [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5] ...
	I1024 19:26:00.549904 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:00.553722 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:00.617058 1118138 logs.go:123] Gathering logs for kube-controller-manager [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b] ...
	I1024 19:26:00.617086 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:00.638071 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:00.668123 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:00.687662 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:00.763756 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:00.763834 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 19:26:00.763913 1118138 out.go:239] X Problems detected in kubelet:
	W1024 19:26:00.764079 1118138 out.go:239]   Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:00.764131 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:00.764162 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:00.764200 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:00.764251 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:00.764283 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:00.764303 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:26:01.050195 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:01.138458 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:01.174639 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:01.180073 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:01.549518 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:01.638730 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:01.667586 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:01.678196 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:02.051607 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:02.138205 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:02.168789 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:02.177346 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:02.557017 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:02.637668 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:02.667691 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:02.677112 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:03.049216 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:03.137923 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:03.167290 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:03.176769 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:03.549623 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:03.637651 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:03.667650 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:03.676267 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:04.049251 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:04.137633 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:04.167000 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:04.176172 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:04.548816 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:04.637406 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:04.672296 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:04.676604 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:05.050090 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:05.138285 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:05.168493 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:05.177297 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:05.612910 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:05.677140 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:05.696655 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:05.715121 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:06.063301 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:06.137974 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:06.172318 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:06.179253 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:06.549930 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:06.647456 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:06.687238 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:06.697510 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:07.056030 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:07.139266 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:07.170661 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:07.178407 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:07.558281 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:07.637732 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:07.669002 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:07.677661 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:08.051332 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:08.139445 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:08.169030 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:08.178874 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:08.560437 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:08.639245 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:08.670828 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:08.678293 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:09.049111 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:09.137814 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:09.186501 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:09.188941 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:09.549087 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:09.640480 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:09.667484 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:09.675964 1118138 kapi.go:107] duration metric: took 1m8.042430621s to wait for kubernetes.io/minikube-addons=registry ...
	I1024 19:26:10.049130 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:10.137790 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:10.173458 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:10.549883 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:10.637989 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:10.668363 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:10.765716 1118138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:26:10.816897 1118138 api_server.go:72] duration metric: took 1m15.047070166s to wait for apiserver process to appear ...
	I1024 19:26:10.816962 1118138 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:26:10.817005 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:26:10.817090 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:26:11.055549 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:11.141842 1118138 cri.go:89] found id: "af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:26:11.141904 1118138 cri.go:89] found id: ""
	I1024 19:26:11.141925 1118138 logs.go:284] 1 containers: [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37]
	I1024 19:26:11.142018 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:11.145144 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:11.160794 1118138 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:26:11.160939 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:26:11.168138 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:11.286149 1118138 cri.go:89] found id: "ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:11.286212 1118138 cri.go:89] found id: ""
	I1024 19:26:11.286232 1118138 logs.go:284] 1 containers: [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df]
	I1024 19:26:11.286318 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:11.296577 1118138 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:26:11.296692 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:26:11.492243 1118138 cri.go:89] found id: "ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:11.492322 1118138 cri.go:89] found id: ""
	I1024 19:26:11.492343 1118138 logs.go:284] 1 containers: [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb]
	I1024 19:26:11.492421 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:11.504920 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:26:11.505038 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:26:11.556497 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:11.647936 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:11.668369 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:11.752616 1118138 cri.go:89] found id: "30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:11.752676 1118138 cri.go:89] found id: ""
	I1024 19:26:11.752697 1118138 logs.go:284] 1 containers: [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b]
	I1024 19:26:11.752786 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:11.776044 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:26:11.776159 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:26:12.020780 1118138 cri.go:89] found id: "a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:12.020852 1118138 cri.go:89] found id: ""
	I1024 19:26:12.020875 1118138 logs.go:284] 1 containers: [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5]
	I1024 19:26:12.020970 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:12.030743 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:26:12.030879 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:26:12.050844 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:12.138728 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:12.143821 1118138 cri.go:89] found id: "837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:12.143847 1118138 cri.go:89] found id: ""
	I1024 19:26:12.143856 1118138 logs.go:284] 1 containers: [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b]
	I1024 19:26:12.143921 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:12.154391 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:26:12.154475 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:26:12.169119 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:12.227381 1118138 cri.go:89] found id: "05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:12.227467 1118138 cri.go:89] found id: ""
	I1024 19:26:12.227498 1118138 logs.go:284] 1 containers: [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703]
	I1024 19:26:12.227593 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:12.233283 1118138 logs.go:123] Gathering logs for kubelet ...
	I1024 19:26:12.233344 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 19:26:12.299702 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.504813    1357 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.300024 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.504846    1357 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.300261 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.524197    1357 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.300548 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.316705 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.316995 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.317238 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.317466 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:12.349499 1118138 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:26:12.349558 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:26:12.536005 1118138 logs.go:123] Gathering logs for kube-apiserver [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37] ...
	I1024 19:26:12.536039 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:26:12.552144 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:12.651367 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:12.662204 1118138 logs.go:123] Gathering logs for etcd [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df] ...
	I1024 19:26:12.662244 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:12.669134 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:12.743017 1118138 logs.go:123] Gathering logs for kube-proxy [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5] ...
	I1024 19:26:12.743049 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:12.805536 1118138 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:26:12.805565 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:26:12.910481 1118138 logs.go:123] Gathering logs for container status ...
	I1024 19:26:12.910516 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:26:13.020698 1118138 logs.go:123] Gathering logs for dmesg ...
	I1024 19:26:13.020735 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:26:13.050439 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:13.059802 1118138 logs.go:123] Gathering logs for coredns [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb] ...
	I1024 19:26:13.059834 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:13.139237 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:13.145344 1118138 logs.go:123] Gathering logs for kube-scheduler [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b] ...
	I1024 19:26:13.145373 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:13.168845 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:13.217492 1118138 logs.go:123] Gathering logs for kube-controller-manager [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b] ...
	I1024 19:26:13.217523 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:13.354257 1118138 logs.go:123] Gathering logs for kindnet [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703] ...
	I1024 19:26:13.354332 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:13.436742 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:13.436817 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 19:26:13.436894 1118138 out.go:239] X Problems detected in kubelet:
	W1024 19:26:13.437076 1118138 out.go:239]   Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:13.437092 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:13.437107 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:13.437115 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:13.437128 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:13.437139 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:13.437148 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:26:13.570482 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:13.640894 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:13.667635 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:14.050030 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:14.138793 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:14.167197 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:14.552636 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:14.637564 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:14.667314 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:15.050294 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:15.139741 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:15.167845 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:15.568480 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:15.638145 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:15.667482 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:16.050194 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:16.138359 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:16.169449 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:16.555477 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:16.643667 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:16.668784 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:17.050746 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:17.138264 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:17.180461 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:17.555281 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:17.638600 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:17.674475 1118138 kapi.go:107] duration metric: took 1m16.045670856s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1024 19:26:18.050470 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:18.143035 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:18.549700 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:18.637296 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:19.049286 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:19.137729 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:19.549053 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:19.638300 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:20.049961 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:20.137901 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:20.550121 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:20.637930 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:21.050026 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:21.138141 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:21.555123 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:21.638944 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:22.052768 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:22.140841 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:22.550359 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:22.637630 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:23.049206 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:23.139011 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:23.438818 1118138 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1024 19:26:23.448034 1118138 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1024 19:26:23.451089 1118138 api_server.go:141] control plane version: v1.28.3
	I1024 19:26:23.451369 1118138 api_server.go:131] duration metric: took 12.634384269s to wait for apiserver health ...
	I1024 19:26:23.451404 1118138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:26:23.451454 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:26:23.451548 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:26:23.525582 1118138 cri.go:89] found id: "af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:26:23.525664 1118138 cri.go:89] found id: ""
	I1024 19:26:23.525697 1118138 logs.go:284] 1 containers: [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37]
	I1024 19:26:23.526058 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.537203 1118138 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:26:23.537320 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:26:23.560449 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:23.610786 1118138 cri.go:89] found id: "ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:23.610846 1118138 cri.go:89] found id: ""
	I1024 19:26:23.610878 1118138 logs.go:284] 1 containers: [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df]
	I1024 19:26:23.610962 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.625489 1118138 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:26:23.625614 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:26:23.643239 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:23.681799 1118138 cri.go:89] found id: "ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:23.681869 1118138 cri.go:89] found id: ""
	I1024 19:26:23.681891 1118138 logs.go:284] 1 containers: [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb]
	I1024 19:26:23.681978 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.691256 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:26:23.691345 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:26:23.768281 1118138 cri.go:89] found id: "30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:23.768304 1118138 cri.go:89] found id: ""
	I1024 19:26:23.768321 1118138 logs.go:284] 1 containers: [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b]
	I1024 19:26:23.768377 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.774094 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:26:23.774174 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:26:23.848548 1118138 cri.go:89] found id: "a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:23.848572 1118138 cri.go:89] found id: ""
	I1024 19:26:23.848581 1118138 logs.go:284] 1 containers: [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5]
	I1024 19:26:23.848652 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.853358 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:26:23.853474 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:26:23.913764 1118138 cri.go:89] found id: "837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:23.913835 1118138 cri.go:89] found id: ""
	I1024 19:26:23.913857 1118138 logs.go:284] 1 containers: [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b]
	I1024 19:26:23.913940 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.920118 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:26:23.920228 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:26:23.979651 1118138 cri.go:89] found id: "05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:23.979720 1118138 cri.go:89] found id: ""
	I1024 19:26:23.979744 1118138 logs.go:284] 1 containers: [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703]
	I1024 19:26:23.979826 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.984483 1118138 logs.go:123] Gathering logs for kube-apiserver [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37] ...
	I1024 19:26:23.984547 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:26:24.061929 1118138 logs.go:123] Gathering logs for etcd [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df] ...
	I1024 19:26:24.062010 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:24.077078 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:24.141274 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:24.150373 1118138 logs.go:123] Gathering logs for kube-proxy [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5] ...
	I1024 19:26:24.150443 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:24.205385 1118138 logs.go:123] Gathering logs for kindnet [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703] ...
	I1024 19:26:24.205461 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:24.264010 1118138 logs.go:123] Gathering logs for container status ...
	I1024 19:26:24.264085 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:26:24.362394 1118138 logs.go:123] Gathering logs for kube-controller-manager [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b] ...
	I1024 19:26:24.362465 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:24.496338 1118138 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:26:24.496452 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:26:24.553677 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:24.602570 1118138 logs.go:123] Gathering logs for kubelet ...
	I1024 19:26:24.602641 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:26:24.638442 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1024 19:26:24.666148 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.504813    1357 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.666417 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.504846    1357 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.666624 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.524197    1357 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.666849 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.679205 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.679472 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.679681 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.679906 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:24.713833 1118138 logs.go:123] Gathering logs for dmesg ...
	I1024 19:26:24.713971 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:26:24.749828 1118138 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:26:24.749898 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:26:24.899701 1118138 logs.go:123] Gathering logs for coredns [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb] ...
	I1024 19:26:24.899735 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:24.960721 1118138 logs.go:123] Gathering logs for kube-scheduler [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b] ...
	I1024 19:26:24.960752 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:25.007321 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:25.007349 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 19:26:25.007397 1118138 out.go:239] X Problems detected in kubelet:
	W1024 19:26:25.007410 1118138 out.go:239]   Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:25.007418 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:25.007426 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:25.007436 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:25.007445 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:25.007457 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:25.007463 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
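
Note on the kubelet problems summarized above: the "no relationship found between node 'addons-228070' and this object" denials come from the node authorizer, which rejects a kubelet's list/watch of a ConfigMap or Secret until a pod referencing it is bound to that node, so during cluster bring-up they are usually transient. A hedged follow-up check, assuming the conventional k8s-app=kube-proxy label (both are standard kubectl commands, nothing minikube-specific):

  kubectl --context addons-228070 -n kube-system get configmap kube-proxy
  kubectl --context addons-228070 -n kube-system get pods -l k8s-app=kube-proxy

If both return cleanly, the denials resolved themselves once the pods were scheduled onto the node.
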
	I1024 19:26:25.049301 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:25.137868 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:25.549167 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:25.637936 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:26.062169 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:26.137595 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:26.551683 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:26.638173 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:27.049788 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:27.137722 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:27.549779 1118138 kapi.go:107] duration metric: took 1m25.545904341s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1024 19:26:27.639609 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:28.137444 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:28.637332 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:29.137862 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:29.638285 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:30.137393 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:30.638011 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:31.142884 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:31.638395 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:32.137548 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:32.637680 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:33.137147 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:33.637085 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:34.137821 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:34.637284 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:35.018340 1118138 system_pods.go:59] 18 kube-system pods found
	I1024 19:26:35.018374 1118138 system_pods.go:61] "coredns-5dd5756b68-fhbrz" [ab8c6257-b394-452b-ad47-175c0704f944] Running
	I1024 19:26:35.018381 1118138 system_pods.go:61] "csi-hostpath-attacher-0" [7a7d8dc2-251a-4db2-a8d3-c61c74797d8f] Running
	I1024 19:26:35.018386 1118138 system_pods.go:61] "csi-hostpath-resizer-0" [6def26fa-40e8-47c4-8680-9528c7339358] Running
	I1024 19:26:35.018391 1118138 system_pods.go:61] "csi-hostpathplugin-zsvq4" [e00a413f-7ea8-45e5-80c6-d3f052fa7b96] Running
	I1024 19:26:35.018396 1118138 system_pods.go:61] "etcd-addons-228070" [64901b37-c071-45df-9df3-c16aabf42b04] Running
	I1024 19:26:35.018401 1118138 system_pods.go:61] "kindnet-zpk2b" [cd7fe14a-6160-4d8f-a555-181f7ffe8365] Running
	I1024 19:26:35.018406 1118138 system_pods.go:61] "kube-apiserver-addons-228070" [36b4d137-4039-4168-9c0a-3cc996475f57] Running
	I1024 19:26:35.018412 1118138 system_pods.go:61] "kube-controller-manager-addons-228070" [b7613bd0-63e4-453f-b73c-455e101f0cbf] Running
	I1024 19:26:35.018422 1118138 system_pods.go:61] "kube-ingress-dns-minikube" [f748865c-b605-4237-9edf-8387e9925319] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1024 19:26:35.018434 1118138 system_pods.go:61] "kube-proxy-qtmf6" [abf30c53-c321-472b-b6ae-08df96a309bd] Running
	I1024 19:26:35.018440 1118138 system_pods.go:61] "kube-scheduler-addons-228070" [15009577-04f0-4752-8104-ce67e82cb40d] Running
	I1024 19:26:35.018446 1118138 system_pods.go:61] "metrics-server-7c66d45ddc-fgmf7" [de24d5b2-08eb-4c8a-9c9b-3d6eb76712d8] Running
	I1024 19:26:35.018451 1118138 system_pods.go:61] "nvidia-device-plugin-daemonset-vnscp" [638ff2b2-e718-4d5a-aa20-ab6d29a35186] Running
	I1024 19:26:35.018456 1118138 system_pods.go:61] "registry-chlmt" [1869d1d7-07f4-4d9c-94d6-4bcc1e8efe3a] Running
	I1024 19:26:35.018465 1118138 system_pods.go:61] "registry-proxy-xdq2s" [16223b37-cd2a-41d2-8ebd-ee2c4fcef1a2] Running
	I1024 19:26:35.018470 1118138 system_pods.go:61] "snapshot-controller-58dbcc7b99-nrnxv" [d6325577-d6ec-4198-9f67-6baaf5e960b0] Running
	I1024 19:26:35.018476 1118138 system_pods.go:61] "snapshot-controller-58dbcc7b99-v2jmr" [75c26e55-e64d-4021-8768-3e849b1ca7b5] Running
	I1024 19:26:35.018484 1118138 system_pods.go:61] "storage-provisioner" [4f736afb-13f3-46ab-bfab-0369c68cd496] Running
	I1024 19:26:35.018489 1118138 system_pods.go:74] duration metric: took 11.567067626s to wait for pod list to return data ...
	I1024 19:26:35.018502 1118138 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:26:35.021469 1118138 default_sa.go:45] found service account: "default"
	I1024 19:26:35.021498 1118138 default_sa.go:55] duration metric: took 2.988887ms for default service account to be created ...
	I1024 19:26:35.021509 1118138 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:26:35.032057 1118138 system_pods.go:86] 18 kube-system pods found
	I1024 19:26:35.032096 1118138 system_pods.go:89] "coredns-5dd5756b68-fhbrz" [ab8c6257-b394-452b-ad47-175c0704f944] Running
	I1024 19:26:35.032104 1118138 system_pods.go:89] "csi-hostpath-attacher-0" [7a7d8dc2-251a-4db2-a8d3-c61c74797d8f] Running
	I1024 19:26:35.032109 1118138 system_pods.go:89] "csi-hostpath-resizer-0" [6def26fa-40e8-47c4-8680-9528c7339358] Running
	I1024 19:26:35.032114 1118138 system_pods.go:89] "csi-hostpathplugin-zsvq4" [e00a413f-7ea8-45e5-80c6-d3f052fa7b96] Running
	I1024 19:26:35.032120 1118138 system_pods.go:89] "etcd-addons-228070" [64901b37-c071-45df-9df3-c16aabf42b04] Running
	I1024 19:26:35.032125 1118138 system_pods.go:89] "kindnet-zpk2b" [cd7fe14a-6160-4d8f-a555-181f7ffe8365] Running
	I1024 19:26:35.032130 1118138 system_pods.go:89] "kube-apiserver-addons-228070" [36b4d137-4039-4168-9c0a-3cc996475f57] Running
	I1024 19:26:35.032137 1118138 system_pods.go:89] "kube-controller-manager-addons-228070" [b7613bd0-63e4-453f-b73c-455e101f0cbf] Running
	I1024 19:26:35.032145 1118138 system_pods.go:89] "kube-ingress-dns-minikube" [f748865c-b605-4237-9edf-8387e9925319] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1024 19:26:35.032152 1118138 system_pods.go:89] "kube-proxy-qtmf6" [abf30c53-c321-472b-b6ae-08df96a309bd] Running
	I1024 19:26:35.032164 1118138 system_pods.go:89] "kube-scheduler-addons-228070" [15009577-04f0-4752-8104-ce67e82cb40d] Running
	I1024 19:26:35.032171 1118138 system_pods.go:89] "metrics-server-7c66d45ddc-fgmf7" [de24d5b2-08eb-4c8a-9c9b-3d6eb76712d8] Running
	I1024 19:26:35.032179 1118138 system_pods.go:89] "nvidia-device-plugin-daemonset-vnscp" [638ff2b2-e718-4d5a-aa20-ab6d29a35186] Running
	I1024 19:26:35.032184 1118138 system_pods.go:89] "registry-chlmt" [1869d1d7-07f4-4d9c-94d6-4bcc1e8efe3a] Running
	I1024 19:26:35.032190 1118138 system_pods.go:89] "registry-proxy-xdq2s" [16223b37-cd2a-41d2-8ebd-ee2c4fcef1a2] Running
	I1024 19:26:35.032196 1118138 system_pods.go:89] "snapshot-controller-58dbcc7b99-nrnxv" [d6325577-d6ec-4198-9f67-6baaf5e960b0] Running
	I1024 19:26:35.032203 1118138 system_pods.go:89] "snapshot-controller-58dbcc7b99-v2jmr" [75c26e55-e64d-4021-8768-3e849b1ca7b5] Running
	I1024 19:26:35.032208 1118138 system_pods.go:89] "storage-provisioner" [4f736afb-13f3-46ab-bfab-0369c68cd496] Running
	I1024 19:26:35.032215 1118138 system_pods.go:126] duration metric: took 10.701184ms to wait for k8s-apps to be running ...
	I1024 19:26:35.032226 1118138 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:26:35.032288 1118138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:26:35.046474 1118138 system_svc.go:56] duration metric: took 14.237908ms WaitForService to wait for kubelet.
	I1024 19:26:35.046500 1118138 kubeadm.go:581] duration metric: took 1m39.276680332s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:26:35.046521 1118138 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:26:35.049848 1118138 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 19:26:35.049879 1118138 node_conditions.go:123] node cpu capacity is 2
	I1024 19:26:35.049890 1118138 node_conditions.go:105] duration metric: took 3.364319ms to run NodePressure ...
	I1024 19:26:35.049900 1118138 start.go:228] waiting for startup goroutines ...
	I1024 19:26:35.138012 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:35.637336 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:36.138431 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:36.638383 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:37.137407 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:37.638106 1118138 kapi.go:107] duration metric: took 1m33.01485004s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1024 19:26:37.640421 1118138 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-228070 cluster.
	I1024 19:26:37.642305 1118138 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1024 19:26:37.644087 1118138 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1024 19:26:37.646214 1118138 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1024 19:26:37.648060 1118138 addons.go:502] enable addons completed in 1m42.132215074s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner inspektor-gadget metrics-server default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1024 19:26:37.648135 1118138 start.go:233] waiting for cluster config update ...
	I1024 19:26:37.648161 1118138 start.go:242] writing updated cluster config ...
	I1024 19:26:37.648546 1118138 ssh_runner.go:195] Run: rm -f paused
	I1024 19:26:37.714003 1118138 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:26:37.716294 1118138 out.go:177] * Done! kubectl is now configured to use "addons-228070" cluster and "default" namespace by default
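
The two gcp-auth hints above are actionable. A minimal sketch of the opt-out, assuming an illustrative pod name and the label value "true" (only the label key gcp-auth-skip-secret comes from the message itself), followed by the refresh path for existing pods:

  kubectl --context addons-228070 run no-creds-demo --image=busybox \
    --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
  out/minikube-linux-arm64 -p addons-228070 addons enable gcp-auth --refresh
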
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:34:00 addons-228070 crio[888]: time="2023-10-24 19:34:00.601320932Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2ee27576-bc87-4755-9c12-8dd2ce84682c name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:00 addons-228070 crio[888]: time="2023-10-24 19:34:00.602461046Z" level=info msg="Pulling image: docker.io/nginx:latest" id=9d4300da-97c3-4754-968f-ca9f4f48c663 name=/runtime.v1.ImageService/PullImage
	Oct 24 19:34:00 addons-228070 crio[888]: time="2023-10-24 19:34:00.604520409Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 24 19:34:06 addons-228070 crio[888]: time="2023-10-24 19:34:06.602301498Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c5a2e656-0ba9-426c-bf8f-b638ea777abf name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:06 addons-228070 crio[888]: time="2023-10-24 19:34:06.602523774Z" level=info msg="Image docker.io/nginx:alpine not found" id=c5a2e656-0ba9-426c-bf8f-b638ea777abf name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:18 addons-228070 crio[888]: time="2023-10-24 19:34:18.601512598Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=2a220d7f-536d-4e25-a133-81fe4c1a45de name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:18 addons-228070 crio[888]: time="2023-10-24 19:34:18.602109970Z" level=info msg="Image docker.io/nginx:alpine not found" id=2a220d7f-536d-4e25-a133-81fe4c1a45de name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:33 addons-228070 crio[888]: time="2023-10-24 19:34:33.601545660Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b6a3304e-ed8f-4b08-81f5-6434824a322a name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:33 addons-228070 crio[888]: time="2023-10-24 19:34:33.601800157Z" level=info msg="Image docker.io/nginx:alpine not found" id=b6a3304e-ed8f-4b08-81f5-6434824a322a name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:41 addons-228070 crio[888]: time="2023-10-24 19:34:41.601194139Z" level=info msg="Checking image status: docker.io/nginx:latest" id=b6998411-1669-4f32-813d-e69711724a85 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:41 addons-228070 crio[888]: time="2023-10-24 19:34:41.601411640Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=b6998411-1669-4f32-813d-e69711724a85 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:42 addons-228070 crio[888]: time="2023-10-24 19:34:42.622115995Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=9f9f1c06-a346-41f2-aeb6-3ac1f5f3d720 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:42 addons-228070 crio[888]: time="2023-10-24 19:34:42.622340437Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6 registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097],Size_:520014,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=9f9f1c06-a346-41f2-aeb6-3ac1f5f3d720 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:47 addons-228070 crio[888]: time="2023-10-24 19:34:47.601180760Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ef2f0be9-ae54-435d-9ed0-4dc38cb83cec name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:47 addons-228070 crio[888]: time="2023-10-24 19:34:47.601405825Z" level=info msg="Image docker.io/nginx:alpine not found" id=ef2f0be9-ae54-435d-9ed0-4dc38cb83cec name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:55 addons-228070 crio[888]: time="2023-10-24 19:34:55.604163116Z" level=info msg="Checking image status: docker.io/nginx:latest" id=2e272f44-4b67-4cc5-aace-b37ba492b702 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:34:55 addons-228070 crio[888]: time="2023-10-24 19:34:55.604393835Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2e272f44-4b67-4cc5-aace-b37ba492b702 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:35:02 addons-228070 crio[888]: time="2023-10-24 19:35:02.602690849Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=597b107a-dd01-4ca4-9e6b-816d185fdcb7 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:35:02 addons-228070 crio[888]: time="2023-10-24 19:35:02.602915899Z" level=info msg="Image docker.io/nginx:alpine not found" id=597b107a-dd01-4ca4-9e6b-816d185fdcb7 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:35:10 addons-228070 crio[888]: time="2023-10-24 19:35:10.601605631Z" level=info msg="Checking image status: docker.io/nginx:latest" id=d307609b-4e9c-4f6d-83c9-cf4c4b7db598 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:35:10 addons-228070 crio[888]: time="2023-10-24 19:35:10.601866323Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d307609b-4e9c-4f6d-83c9-cf4c4b7db598 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:35:16 addons-228070 crio[888]: time="2023-10-24 19:35:16.601400993Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7f51afa9-a72b-4c11-a4ef-9f6524eddda1 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:35:16 addons-228070 crio[888]: time="2023-10-24 19:35:16.601624803Z" level=info msg="Image docker.io/nginx:alpine not found" id=7f51afa9-a72b-4c11-a4ef-9f6524eddda1 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:35:25 addons-228070 crio[888]: time="2023-10-24 19:35:25.605956709Z" level=info msg="Checking image status: docker.io/nginx:latest" id=d7a8526a-5b48-4335-aa20-fc0b549ea669 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:35:25 addons-228070 crio[888]: time="2023-10-24 19:35:25.606185016Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d7a8526a-5b48-4335-aa20-fc0b549ea669 name=/runtime.v1.ImageService/ImageStatus
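
The CRI-O entries above point at the proximate cause of the Ingress failure: docker.io/nginx:alpine was never successfully pulled, which matches the nginx test pod's ImagePullBackOff. A diagnostic sketch against this profile (crictl pull and kubectl events are standard commands; the library/ prefix is Docker Hub's implicit namespace):

  out/minikube-linux-arm64 -p addons-228070 ssh -- sudo crictl pull docker.io/library/nginx:alpine
  kubectl --context addons-228070 -n default get events --field-selector involvedObject.name=nginx

The first command would surface the underlying registry error (for example, Docker Hub rate limiting or a network failure) that the periodic "not found" status probes above do not show.
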
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	23e68bc94b71f       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                                             4 minutes ago       Exited              minikube-ingress-dns                     6                   f7308c9290220       kube-ingress-dns-minikube
	a29c55d246070       ghcr.io/headlamp-k8s/headlamp@sha256:8e813897da00c345b1169d624b32e2367e5da1dbbffe33226f8a92973b816b50                                        8 minutes ago       Running             headlamp                                 0                   b8c4ed44d3106       headlamp-94b766c-tn68w
	90aa35ebcef96       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 8 minutes ago       Running             gcp-auth                                 0                   88060deb243cb       gcp-auth-d4c87556c-gq5sh
	f12c66d58fcbf       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          8 minutes ago       Running             csi-snapshotter                          0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	1bd865277c8cc       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          9 minutes ago       Running             csi-provisioner                          0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	bc22eb3c5d3e6       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            9 minutes ago       Running             liveness-probe                           0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	f2d9cd1cbda4c       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           9 minutes ago       Running             hostpath                                 0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	4ff4759f93747       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                9 minutes ago       Running             node-driver-registrar                    0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	fe77220d409eb       registry.k8s.io/ingress-nginx/controller@sha256:79e6b8cb9a4e9cfad53862c2aa3e98b8281cc353908517a5e636a531ad331d7c                             9 minutes ago       Running             controller                               0                   3e683cbae41c7       ingress-nginx-controller-6f48fc54bd-cskmn
	388560cb3eb8b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   9 minutes ago       Exited              patch                                    0                   6f76a7cb3d750       ingress-nginx-admission-patch-ht52w
	d0df4f62fd906       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   9 minutes ago       Running             csi-external-health-monitor-controller   0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	2b7d2d1f38e08       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      9 minutes ago       Running             volume-snapshot-controller               0                   3b32fcf319335       snapshot-controller-58dbcc7b99-nrnxv
	a5a901295ccaf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   9 minutes ago       Exited              create                                   0                   828972d42ca5b       ingress-nginx-admission-create-grpcs
	39576799cd883       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              9 minutes ago       Running             csi-resizer                              0                   6ac9a0bb61fe8       csi-hostpath-resizer-0
	c9bb64f813447       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      9 minutes ago       Running             volume-snapshot-controller               0                   143f92bafe19f       snapshot-controller-58dbcc7b99-v2jmr
	991a2a6d18e6a       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             9 minutes ago       Running             local-path-provisioner                   0                   937b5ac4e09c1       local-path-provisioner-78b46b4d5c-n4dx9
	9a4f5374f2806       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             9 minutes ago       Running             csi-attacher                             0                   2b9da20e83cf4       csi-hostpath-attacher-0
	e7593a21d5782       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             9 minutes ago       Running             storage-provisioner                      0                   406b51454fe54       storage-provisioner
	ed04655a9a89b       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             9 minutes ago       Running             coredns                                  0                   6fe415cbd9cc0       coredns-5dd5756b68-fhbrz
	05a4fa5dbaf4c       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                                             10 minutes ago      Running             kindnet-cni                              0                   58dbd906b345f       kindnet-zpk2b
	a568bb4094016       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                                             10 minutes ago      Running             kube-proxy                               0                   72163d1a17079       kube-proxy-qtmf6
	af521e6bd1f01       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                                                             10 minutes ago      Running             kube-apiserver                           0                   ffad0e29af7ce       kube-apiserver-addons-228070
	837e6ec9f669d       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                                             10 minutes ago      Running             kube-controller-manager                  0                   80c27b65764d7       kube-controller-manager-addons-228070
	30760e9bfa89f       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                                             10 minutes ago      Running             kube-scheduler                           0                   8174ff04bb59c       kube-scheduler-addons-228070
	ba7f1603e1423       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                                             10 minutes ago      Running             etcd                                     0                   054f41ae6c499       etcd-addons-228070
	
	* 
	* ==> coredns [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb] <==
	* [INFO] 10.244.0.13:36388 - 41647 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002260635s
	[INFO] 10.244.0.13:42282 - 63330 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000110318s
	[INFO] 10.244.0.13:42282 - 62556 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152926s
	[INFO] 10.244.0.13:33118 - 18552 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100381s
	[INFO] 10.244.0.13:33118 - 28276 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000061432s
	[INFO] 10.244.0.13:57352 - 22278 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059585s
	[INFO] 10.244.0.13:57352 - 14595 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036439s
	[INFO] 10.244.0.13:58946 - 47308 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103876s
	[INFO] 10.244.0.13:58946 - 38094 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112352s
	[INFO] 10.244.0.13:35116 - 20549 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001406862s
	[INFO] 10.244.0.13:35116 - 9339 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001440331s
	[INFO] 10.244.0.13:47372 - 59052 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006112s
	[INFO] 10.244.0.13:47372 - 61870 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111811s
	[INFO] 10.244.0.19:57549 - 49706 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000282591s
	[INFO] 10.244.0.19:43958 - 25922 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000408975s
	[INFO] 10.244.0.19:40902 - 54517 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000221571s
	[INFO] 10.244.0.19:58432 - 38005 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000178871s
	[INFO] 10.244.0.19:56734 - 27703 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000223269s
	[INFO] 10.244.0.19:36669 - 29902 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000266805s
	[INFO] 10.244.0.19:50379 - 5908 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002136065s
	[INFO] 10.244.0.19:57043 - 47835 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002451666s
	[INFO] 10.244.0.19:41005 - 41696 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000791734s
	[INFO] 10.244.0.19:42052 - 61210 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001354005s
	[INFO] 10.244.0.21:37151 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000200089s
	[INFO] 10.244.0.21:44132 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000131248s
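
The NXDOMAIN chains above are ordinary pod DNS search-path expansion: with ndots:5, a name with fewer than five dots is first tried with every configured search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the host's us-east-2.compute.internal) before the bare name gets its NOERROR answer. One hedged way to inspect the search path from inside the cluster (assumes a busybox image can be pulled, which this run's docker.io trouble might prevent):

  kubectl --context addons-228070 run dns-probe --image=busybox --rm -it --restart=Never -- cat /etc/resolv.conf
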
	
	* 
	* ==> describe nodes <==
	* Name:               addons-228070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-228070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=addons-228070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_24_43_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-228070
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-228070"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-228070
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:35:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:32:22 +0000   Tue, 24 Oct 2023 19:24:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:32:22 +0000   Tue, 24 Oct 2023 19:24:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:32:22 +0000   Tue, 24 Oct 2023 19:24:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:32:22 +0000   Tue, 24 Oct 2023 19:25:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-228070
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 bff6bc2f0e7246fcb1d863c8f524e2a6
	  System UUID:                68df10e7-4bae-46a3-a993-9195f34a2cb5
	  Boot ID:                    f05db690-1143-478b-8d18-db062f271a9b
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  gcp-auth                    gcp-auth-d4c87556c-gq5sh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  headlamp                    headlamp-94b766c-tn68w                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  ingress-nginx               ingress-nginx-controller-6f48fc54bd-cskmn    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-fhbrz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-zsvq4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kube-system                 etcd-addons-228070                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kindnet-zpk2b                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-228070                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-228070        200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-qtmf6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-228070                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-58dbcc7b99-nrnxv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-58dbcc7b99-v2jmr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-78b46b4d5c-n4dx9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-228070 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-228070 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node addons-228070 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-228070 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-228070 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-228070 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node addons-228070 event: Registered Node addons-228070 in Controller
	  Normal  NodeReady                9m58s              kubelet          Node addons-228070 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001113] FS-Cache: O-key=[8] '80623b0000000000'
	[  +0.000757] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000f7ef6ada
	[  +0.001085] FS-Cache: N-key=[8] '80623b0000000000'
	[  +0.002635] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000bf36fe5e
	[  +0.001181] FS-Cache: O-key=[8] '80623b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000b7ed4e62
	[  +0.001156] FS-Cache: N-key=[8] '80623b0000000000'
	[  +3.138037] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000984] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000a1cd37ca
	[  +0.001134] FS-Cache: O-key=[8] '7f623b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001008] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000f7ef6ada
	[  +0.001075] FS-Cache: N-key=[8] '7f623b0000000000'
	[  +0.302369] FS-Cache: Duplicate cookie detected
	[  +0.000770] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001049] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=000000003058710d
	[  +0.001121] FS-Cache: O-key=[8] '85623b0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000c7864bf1
	[  +0.001088] FS-Cache: N-key=[8] '85623b0000000000'
	
	* 
	* ==> etcd [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df] <==
	* {"level":"info","ts":"2023-10-24T19:24:36.222604Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:24:36.677777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-24T19:24:36.677901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-24T19:24:36.677951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-24T19:24:36.67802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-24T19:24:36.678053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-24T19:24:36.678108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-24T19:24:36.678142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-24T19:24:36.681881Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:36.685941Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-228070 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:24:36.686015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:24:36.687102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:24:36.687307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:24:36.688178Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-24T19:24:36.688652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:24:36.688715Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:24:36.688924Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:36.689045Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:36.689095Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:59.431665Z","caller":"traceutil/trace.go:171","msg":"trace[734719481] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"112.299376ms","start":"2023-10-24T19:24:59.319352Z","end":"2023-10-24T19:24:59.431651Z","steps":["trace[734719481] 'process raft request'  (duration: 111.994294ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:24:59.475254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.535055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-10-24T19:24:59.50079Z","caller":"traceutil/trace.go:171","msg":"trace[1394886172] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:433; }","duration":"135.080754ms","start":"2023-10-24T19:24:59.365694Z","end":"2023-10-24T19:24:59.500774Z","steps":["trace[1394886172] 'agreement among raft nodes before linearized reading'  (duration: 108.609142ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-24T19:34:36.843109Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1756}
	{"level":"info","ts":"2023-10-24T19:34:36.870751Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1756,"took":"27.149426ms","hash":4007452805}
	{"level":"info","ts":"2023-10-24T19:34:36.870809Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4007452805,"revision":1756,"compact-revision":-1}
	
	* 
	* ==> gcp-auth [90aa35ebcef96ff423d95c233ad9c62b62805d0ca3c4783478577ca8337fc4b5] <==
	* 2023/10/24 19:26:36 GCP Auth Webhook started!
	2023/10/24 19:26:44 Ready to marshal response ...
	2023/10/24 19:26:44 Ready to write response ...
	2023/10/24 19:26:44 Ready to marshal response ...
	2023/10/24 19:26:44 Ready to write response ...
	2023/10/24 19:26:48 Ready to marshal response ...
	2023/10/24 19:26:48 Ready to write response ...
	2023/10/24 19:26:53 Ready to marshal response ...
	2023/10/24 19:26:53 Ready to write response ...
	2023/10/24 19:26:54 Ready to marshal response ...
	2023/10/24 19:26:54 Ready to write response ...
	2023/10/24 19:26:55 Ready to marshal response ...
	2023/10/24 19:26:55 Ready to write response ...
	2023/10/24 19:26:55 Ready to marshal response ...
	2023/10/24 19:26:55 Ready to write response ...
	2023/10/24 19:26:59 Ready to marshal response ...
	2023/10/24 19:26:59 Ready to write response ...
	2023/10/24 19:27:24 Ready to marshal response ...
	2023/10/24 19:27:24 Ready to write response ...
	2023/10/24 19:27:29 Ready to marshal response ...
	2023/10/24 19:27:29 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:35:26 up  9:17,  0 users,  load average: 0.22, 0.56, 1.32
	Linux addons-228070 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703] <==
	* I1024 19:33:18.786395       1 main.go:227] handling current node
	I1024 19:33:28.795993       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:33:28.796022       1 main.go:227] handling current node
	I1024 19:33:38.807764       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:33:38.807789       1 main.go:227] handling current node
	I1024 19:33:48.820150       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:33:48.820180       1 main.go:227] handling current node
	I1024 19:33:58.824190       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:33:58.824216       1 main.go:227] handling current node
	I1024 19:34:08.836106       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:34:08.836133       1 main.go:227] handling current node
	I1024 19:34:18.845370       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:34:18.845401       1 main.go:227] handling current node
	I1024 19:34:28.857371       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:34:28.857400       1 main.go:227] handling current node
	I1024 19:34:38.861519       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:34:38.861546       1 main.go:227] handling current node
	I1024 19:34:48.870660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:34:48.870689       1 main.go:227] handling current node
	I1024 19:34:58.881452       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:34:58.881479       1 main.go:227] handling current node
	I1024 19:35:08.893472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:35:08.893502       1 main.go:227] handling current node
	I1024 19:35:18.901561       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:35:18.901596       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37] <==
	* E1024 19:25:49.281860       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1024 19:25:49.282834       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.9.255:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.9.255:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.9.255:443: connect: connection refused
	I1024 19:25:49.283049       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 19:25:49.357022       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 19:26:38.935475       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1024 19:26:52.640193       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400be179b0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400b2b2eb0), ResponseWriter:(*httpsnoop.rw)(0x400b2b2eb0), Flusher:(*httpsnoop.rw)(0x400b2b2eb0), CloseNotifier:(*httpsnoop.rw)(0x400b2b2eb0), Pusher:(*httpsnoop.rw)(0x400b2b2eb0)}}, encoder:(*versioning.codec)(0x4008eb6320), memAllocator:(*runtime.Allocator)(0x400871e468)})
	I1024 19:26:54.990169       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.144.179"}
	I1024 19:27:10.881818       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1024 19:27:11.794596       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1024 19:27:11.847875       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1024 19:27:12.884200       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1024 19:27:23.801095       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1024 19:27:24.185066       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.143.9"}
	I1024 19:27:50.313807       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1024 19:29:39.125353       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:29:39.125425       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:29:39.125874       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:29:39.125922       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:34:39.125331       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:34:39.125400       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:34:39.125689       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:34:39.125796       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:34:39.126298       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:34:39.126391       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b] <==
	* I1024 19:27:25.038869       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1024 19:27:25.038915       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:27:29.205415       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1024 19:27:30.187448       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:27:30.187483       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:27:47.384447       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:27:47.384487       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:28:38.323143       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:28:38.323179       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:29:25.452238       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:29:25.452273       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:30:13.887450       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:30:13.887570       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:30:56.680973       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:30:56.681010       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:31:30.478011       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:31:30.478048       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:32:17.414627       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:32:17.414659       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:33:04.983264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:33:04.983300       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:33:55.101843       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:33:55.101879       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:34:54.800682       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:34:54.800715       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5] <==
	* I1024 19:25:00.573856       1 server_others.go:69] "Using iptables proxy"
	I1024 19:25:00.903063       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1024 19:25:01.085905       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 19:25:01.089064       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:25:01.089177       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 19:25:01.089222       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 19:25:01.089307       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:25:01.089588       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:25:01.089868       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:25:01.091536       1 config.go:188] "Starting service config controller"
	I1024 19:25:01.091646       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:25:01.091708       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:25:01.091737       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:25:01.092374       1 config.go:315] "Starting node config controller"
	I1024 19:25:01.092434       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:25:01.192530       1 shared_informer.go:318] Caches are synced for node config
	I1024 19:25:01.205129       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:25:01.205244       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b] <==
	* I1024 19:24:39.725726       1 serving.go:348] Generated self-signed cert in-memory
	I1024 19:24:40.738906       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 19:24:40.739028       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:24:40.743446       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1024 19:24:40.743563       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1024 19:24:40.743682       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 19:24:40.743730       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:24:40.743776       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1024 19:24:40.743823       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1024 19:24:40.744115       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 19:24:40.744180       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 19:24:40.844099       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1024 19:24:40.844115       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1024 19:24:40.844139       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.780682    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9a83840cc303ba9f0a89ea3f36d2079630a030d7d5798db64ced95170fc3aeb2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9a83840cc303ba9f0a89ea3f36d2079630a030d7d5798db64ced95170fc3aeb2/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.784988    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/433766df0ce0b6b0e636223f18d8a28d76751c21e3c048a3b945c590c49b3a0d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/433766df0ce0b6b0e636223f18d8a28d76751c21e3c048a3b945c590c49b3a0d/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.785145    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/31bd620987af58ef82c3d330c536c5110c894619addc084614b4284036f3aadf/diff" to get inode usage: stat /var/lib/containers/storage/overlay/31bd620987af58ef82c3d330c536c5110c894619addc084614b4284036f3aadf/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.787189    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c80d1ee68778bfa263db5bbcf826c7a46d8f0f9d06616c396ef4dfb704807255/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c80d1ee68778bfa263db5bbcf826c7a46d8f0f9d06616c396ef4dfb704807255/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.789354    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cf7a3bd98ce6233abf40d5ac03bc365973d23b98665840f501d32556ce7a3343/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cf7a3bd98ce6233abf40d5ac03bc365973d23b98665840f501d32556ce7a3343/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.795100    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cac1ccb56e540cdc6b9991d02b3a6cf64353d241a2362762c1e80782f1383771/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cac1ccb56e540cdc6b9991d02b3a6cf64353d241a2362762c1e80782f1383771/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.795113    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/821d4f21672e26b575c4e87c6130cc6fdeed5216f71610518c2ff6696da3c8b5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/821d4f21672e26b575c4e87c6130cc6fdeed5216f71610518c2ff6696da3c8b5/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.798720    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c3727c0a805f62335f59b9d2794eee39326378b291ec537641f02b93953f9440/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c3727c0a805f62335f59b9d2794eee39326378b291ec537641f02b93953f9440/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.798737    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9a83840cc303ba9f0a89ea3f36d2079630a030d7d5798db64ced95170fc3aeb2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9a83840cc303ba9f0a89ea3f36d2079630a030d7d5798db64ced95170fc3aeb2/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.800930    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/169aca2960cce320f539aced5304e73bd1d75b04351c75d6d2de24010c14f3d2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/169aca2960cce320f539aced5304e73bd1d75b04351c75d6d2de24010c14f3d2/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:42 addons-228070 kubelet[1357]: E1024 19:34:42.804325    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/01fb257adf1e97456d353ba368b515caeb516576ae6e99be3128bd14c3195274/diff" to get inode usage: stat /var/lib/containers/storage/overlay/01fb257adf1e97456d353ba368b515caeb516576ae6e99be3128bd14c3195274/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:34:46 addons-228070 kubelet[1357]: I1024 19:34:46.600770    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:34:46 addons-228070 kubelet[1357]: E1024 19:34:46.601044    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:34:47 addons-228070 kubelet[1357]: E1024 19:34:47.601876    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="d55f9bf6-38ea-4587-adb0-f64601bb7bf1"
	Oct 24 19:34:55 addons-228070 kubelet[1357]: E1024 19:34:55.606008    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="40c23b65-3bd3-4526-96a1-30fa85a4b97a"
	Oct 24 19:34:59 addons-228070 kubelet[1357]: I1024 19:34:59.600873    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:34:59 addons-228070 kubelet[1357]: E1024 19:34:59.601154    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:35:02 addons-228070 kubelet[1357]: E1024 19:35:02.603121    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="d55f9bf6-38ea-4587-adb0-f64601bb7bf1"
	Oct 24 19:35:10 addons-228070 kubelet[1357]: I1024 19:35:10.601100    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:35:10 addons-228070 kubelet[1357]: E1024 19:35:10.601356    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:35:10 addons-228070 kubelet[1357]: E1024 19:35:10.602113    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="40c23b65-3bd3-4526-96a1-30fa85a4b97a"
	Oct 24 19:35:16 addons-228070 kubelet[1357]: E1024 19:35:16.601842    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="d55f9bf6-38ea-4587-adb0-f64601bb7bf1"
	Oct 24 19:35:23 addons-228070 kubelet[1357]: I1024 19:35:23.600623    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:35:23 addons-228070 kubelet[1357]: E1024 19:35:23.600920    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:35:25 addons-228070 kubelet[1357]: E1024 19:35:25.606787    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="40c23b65-3bd3-4526-96a1-30fa85a4b97a"
	
	* 
	* ==> storage-provisioner [e7593a21d5782f56f260f444db0a975ea4862e38b2e9fa85e828961fc60380b4] <==
	* I1024 19:25:29.917672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:25:29.941662       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:25:29.941842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:25:29.948500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:25:29.948749       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-228070_d27d2038-845c-4fa4-839b-b1453fb7ec28!
	I1024 19:25:29.949655       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5125c969-42f3-4486-b969-fc535e305358", APIVersion:"v1", ResourceVersion:"877", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-228070_d27d2038-845c-4fa4-839b-b1453fb7ec28 became leader
	I1024 19:25:30.049844       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-228070_d27d2038-845c-4fa4-839b-b1453fb7ec28!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-228070 -n addons-228070
helpers_test.go:261: (dbg) Run:  kubectl --context addons-228070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod-restore ingress-nginx-admission-create-grpcs ingress-nginx-admission-patch-ht52w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-228070 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-grpcs ingress-nginx-admission-patch-ht52w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-228070 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-grpcs ingress-nginx-admission-patch-ht52w: exit status 1 (139.014621ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-228070/192.168.49.2
	Start Time:       Tue, 24 Oct 2023 19:27:24 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m8lx7 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-m8lx7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-228070
	  Warning  Failed     7m32s                  kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m17s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m30s (x4 over 8m3s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m46s (x4 over 7m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     3m46s (x2 over 5m17s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m32s (x6 over 7m32s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m54s (x9 over 7m32s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-228070/192.168.49.2
	Start Time:       Tue, 24 Oct 2023 19:27:29 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjt45 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-rjt45:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  7m57s                  default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-228070
	  Warning  Failed     4m16s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m26s (x4 over 7m58s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m56s (x3 over 7m2s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m56s (x4 over 7m2s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m29s (x7 over 7m1s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m29s (x7 over 7m1s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-grpcs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ht52w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-228070 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-grpcs ingress-nginx-admission-patch-ht52w: exit status 1
--- FAIL: TestAddons/parallel/Ingress (484.65s)
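Both pods in this test sat in ImagePullBackOff because anonymous pulls from Docker Hub hit the toomanyrequests rate limit (see the kubelet events above). When triaging a run like this, the remaining anonymous quota for the runner's IP can be checked against Docker Hub's documented rate-limit endpoint; this is a diagnostic sketch, assuming curl and jq are available on the runner:

	# fetch an anonymous pull token for Docker's documented rate-limit test image
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	# HEAD the manifest and print the ratelimit-* response headers
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The HEAD request should report the ratelimit-limit and ratelimit-remaining headers without itself consuming a pull, which tells you whether the runner's shared IP had already exhausted its quota before the test started.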

                                                
                                    
TestAddons/parallel/CSI (394.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 12.755739ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-228070 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-228070 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [79749512-5af0-41ff-bc78-491c64dea4fc] Pending
helpers_test.go:344: "task-pv-pod" [79749512-5af0-41ff-bc78-491c64dea4fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [79749512-5af0-41ff-bc78-491c64dea4fc] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.034368194s
addons_test.go:583: (dbg) Run:  kubectl --context addons-228070 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-228070 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-228070 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-228070 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-228070 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-228070 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-228070 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-228070 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [40c23b65-3bd3-4526-96a1-30fa85a4b97a] Pending
helpers_test.go:344: "task-pv-pod-restore" [40c23b65-3bd3-4526-96a1-30fa85a4b97a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
addons_test.go:620: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod-restore" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:620: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-228070 -n addons-228070
addons_test.go:620: TestAddons/parallel/CSI: showing logs for failed pods as of 2023-10-24 19:33:30.055939052 +0000 UTC m=+601.098976356
addons_test.go:620: (dbg) Run:  kubectl --context addons-228070 describe po task-pv-pod-restore -n default
addons_test.go:620: (dbg) kubectl --context addons-228070 describe po task-pv-pod-restore -n default:
Name:             task-pv-pod-restore
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-228070/192.168.49.2
Start Time:       Tue, 24 Oct 2023 19:27:29 +0000
Labels:           app=task-pv-pod-restore
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjt45 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc-restore
    ReadOnly:   false
  kube-api-access-rjt45:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  6m                  default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-228070
  Warning  Failed     2m19s               kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    89s (x4 over 6m1s)  kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     59s (x3 over 5m5s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     59s (x4 over 5m5s)  kubelet            Error: ErrImagePull
  Normal   BackOff    32s (x7 over 5m4s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     32s (x7 over 5m4s)  kubelet            Error: ImagePullBackOff
addons_test.go:620: (dbg) Run:  kubectl --context addons-228070 logs task-pv-pod-restore -n default
addons_test.go:620: (dbg) Non-zero exit: kubectl --context addons-228070 logs task-pv-pod-restore -n default: exit status 1 (106.95778ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod-restore" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:620: kubectl --context addons-228070 logs task-pv-pod-restore -n default: exit status 1
addons_test.go:621: failed waiting for pod task-pv-pod-restore: app=task-pv-pod-restore within 6m0s: context deadline exceeded
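Every non-running pod in this run fails the same way: unauthenticated pulls of docker.io/nginx exceeding Docker Hub's rate limit, not a defect in the CSI or ingress addons themselves. One possible mitigation, sketched here with an illustrative secret name and placeholder credentials, is to register an authenticated pull secret in the cluster and attach it to the default service account so that kubelet pulls count against an authenticated quota:

	# create a Docker Hub pull secret (username and access token are placeholders)
	kubectl --context addons-228070 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	# let the default service account use it for image pulls
	kubectl --context addons-228070 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

Alternatively, mirroring the test images into a registry the CI network is already authenticated against would remove the dependency on Docker Hub entirely.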
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-228070
helpers_test.go:235: (dbg) docker inspect addons-228070:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40",
	        "Created": "2023-10-24T19:24:21.412947887Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1118596,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:24:21.72306644Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40/hosts",
	        "LogPath": "/var/lib/docker/containers/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40/b8190648a2494c03a66084944e6e666a54f0e4f720cbacccd493bf0c1ef9fb40-json.log",
	        "Name": "/addons-228070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-228070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-228070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1342b74d74b452e0cae9517227eac31573fef4763faee6dfdca49587620218da-init/diff:/var/lib/docker/overlay2/ab7e622cf253e7484ae8d7af3c5bb3ba83f211c878ee7a8c069db30bbba78b6c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1342b74d74b452e0cae9517227eac31573fef4763faee6dfdca49587620218da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1342b74d74b452e0cae9517227eac31573fef4763faee6dfdca49587620218da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1342b74d74b452e0cae9517227eac31573fef4763faee6dfdca49587620218da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-228070",
	                "Source": "/var/lib/docker/volumes/addons-228070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-228070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-228070",
	                "name.minikube.sigs.k8s.io": "addons-228070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0b8279e960ee9cf210571e91a7a50c0a03039aa250d378ad0b781b6177f7a86",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34210"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34209"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34208"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34207"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e0b8279e960e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-228070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b8190648a249",
	                        "addons-228070"
	                    ],
	                    "NetworkID": "269732e24e22caf879a9ab6a4e06c7cd3d21ef6dc936ec12a30edb19d0435768",
	                    "EndpointID": "b5785c518fdf9d4bf4c4ed803a4f8e63d3864a49823611f4aafdf9feed8c130d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-228070 -n addons-228070
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-228070 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-228070 logs -n 25: (1.662419859s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-654862   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | -p download-only-654862                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-654862   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | -p download-only-654862                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| delete  | -p download-only-654862                                                                     | download-only-654862   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| delete  | -p download-only-654862                                                                     | download-only-654862   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| start   | --download-only -p                                                                          | download-docker-959559 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | download-docker-959559                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-959559                                                                   | download-docker-959559 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-775727   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | binary-mirror-775727                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38809                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-775727                                                                     | binary-mirror-775727   | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:23 UTC |
	| addons  | enable dashboard -p                                                                         | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | addons-228070                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |                     |
	|         | addons-228070                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-228070 --wait=true                                                                | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC | 24 Oct 23 19:26 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | -p addons-228070                                                                            |                        |         |         |                     |                     |
	| ip      | addons-228070 ip                                                                            | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	| addons  | addons-228070 addons disable                                                                | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-228070 ssh cat                                                                       | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | /opt/local-path-provisioner/pvc-320b3b4e-2781-4009-93c4-e0f32e3a5a23_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-228070 addons disable                                                                | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | -p addons-228070                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:26 UTC | 24 Oct 23 19:26 UTC |
	|         | addons-228070                                                                               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | addons-228070                                                                               |                        |         |         |                     |                     |
	| addons  | addons-228070 addons                                                                        | addons-228070          | jenkins | v1.31.2 | 24 Oct 23 19:27 UTC | 24 Oct 23 19:27 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
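	The multi-row `start` entry for addons-228070 in the table above is a single invocation that the audit table wraps for width. Reassembled on one line (flags exactly as listed in the rows; the binary path comes from MINIKUBE_BIN shown later in this log), it reads:

	    out/minikube-linux-arm64 start -p addons-228070 --wait=true --memory=4000 \
	      --alsologtostderr --addons=registry --addons=metrics-server \
	      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	      --addons=cloud-spanner --addons=inspektor-gadget \
	      --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
	      --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns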
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:23:58
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:23:58.232556 1118138 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:23:58.232769 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:23:58.232795 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:23:58.232817 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:23:58.233102 1118138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 19:23:58.233565 1118138 out.go:303] Setting JSON to false
	I1024 19:23:58.234690 1118138 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32788,"bootTime":1698142651,"procs":384,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:23:58.234791 1118138 start.go:138] virtualization:  
	I1024 19:23:58.238010 1118138 out.go:177] * [addons-228070] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 19:23:58.240982 1118138 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:23:58.243032 1118138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:23:58.241121 1118138 notify.go:220] Checking for updates...
	I1024 19:23:58.245605 1118138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:23:58.247506 1118138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:23:58.249749 1118138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 19:23:58.251649 1118138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:23:58.254089 1118138 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:23:58.280909 1118138 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:23:58.281026 1118138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:23:58.357069 1118138 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-24 19:23:58.347594636 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:23:58.357170 1118138 docker.go:295] overlay module found
	I1024 19:23:58.360770 1118138 out.go:177] * Using the docker driver based on user configuration
	I1024 19:23:58.363019 1118138 start.go:298] selected driver: docker
	I1024 19:23:58.363036 1118138 start.go:902] validating driver "docker" against <nil>
	I1024 19:23:58.363049 1118138 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:23:58.363666 1118138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:23:58.431801 1118138 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-24 19:23:58.422643161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:23:58.432016 1118138 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:23:58.432236 1118138 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:23:58.434443 1118138 out.go:177] * Using Docker driver with root privileges
	I1024 19:23:58.436442 1118138 cni.go:84] Creating CNI manager for ""
	I1024 19:23:58.436466 1118138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:23:58.436477 1118138 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:23:58.436492 1118138 start_flags.go:323] config:
	{Name:addons-228070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-228070 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:23:58.438985 1118138 out.go:177] * Starting control plane node addons-228070 in cluster addons-228070
	I1024 19:23:58.441075 1118138 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:23:58.443203 1118138 out.go:177] * Pulling base image ...
	I1024 19:23:58.445359 1118138 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:23:58.445401 1118138 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1024 19:23:58.445412 1118138 cache.go:57] Caching tarball of preloaded images
	I1024 19:23:58.445460 1118138 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:23:58.445495 1118138 preload.go:174] Found /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1024 19:23:58.445505 1118138 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:23:58.445934 1118138 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/config.json ...
	I1024 19:23:58.445967 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/config.json: {Name:mk6577e7c79f8446f59999ab7a22676511cb2efb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:23:58.462487 1118138 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:23:58.462614 1118138 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1024 19:23:58.462633 1118138 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1024 19:23:58.462638 1118138 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1024 19:23:58.462645 1118138 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1024 19:23:58.462650 1118138 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from local cache
	I1024 19:24:13.995511 1118138 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 from cached tarball
	I1024 19:24:13.995549 1118138 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:24:13.995618 1118138 start.go:365] acquiring machines lock for addons-228070: {Name:mke1bcca4f678271bb257b8b6dc020a3e38db683 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:24:13.995753 1118138 start.go:369] acquired machines lock for "addons-228070" in 104.656µs
	I1024 19:24:13.995791 1118138 start.go:93] Provisioning new machine with config: &{Name:addons-228070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-228070 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:24:13.995878 1118138 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:24:13.998404 1118138 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1024 19:24:13.998665 1118138 start.go:159] libmachine.API.Create for "addons-228070" (driver="docker")
	I1024 19:24:13.998695 1118138 client.go:168] LocalClient.Create starting
	I1024 19:24:13.998799 1118138 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem
	I1024 19:24:14.270046 1118138 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem
	I1024 19:24:14.742820 1118138 cli_runner.go:164] Run: docker network inspect addons-228070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 19:24:14.763418 1118138 cli_runner.go:211] docker network inspect addons-228070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 19:24:14.763511 1118138 network_create.go:281] running [docker network inspect addons-228070] to gather additional debugging logs...
	I1024 19:24:14.763532 1118138 cli_runner.go:164] Run: docker network inspect addons-228070
	W1024 19:24:14.784008 1118138 cli_runner.go:211] docker network inspect addons-228070 returned with exit code 1
	I1024 19:24:14.784048 1118138 network_create.go:284] error running [docker network inspect addons-228070]: docker network inspect addons-228070: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-228070 not found
	I1024 19:24:14.784061 1118138 network_create.go:286] output of [docker network inspect addons-228070]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-228070 not found
	
	** /stderr **
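	The exit-code-1 inspect above is the expected probe, not a failure: minikube checks whether the profile network exists and only creates it when the inspect fails. A trimmed shell sketch of the same probe-then-create sequence (names, subnet, and MTU taken from this log; the real implementation is the Go code path shown in network_create.go):

	    # Probe for the profile network; a non-zero exit means "not found".
	    if ! docker network inspect addons-228070 >/dev/null 2>&1; then
	      docker network create --driver=bridge --subnet=192.168.49.0/24 \
	        --gateway=192.168.49.1 -o com.docker.network.driver.mtu=1500 \
	        --label=created_by.minikube.sigs.k8s.io=true addons-228070
	    fi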
	I1024 19:24:14.784164 1118138 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:24:14.801967 1118138 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002993c60}
	I1024 19:24:14.802009 1118138 network_create.go:124] attempt to create docker network addons-228070 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1024 19:24:14.802074 1118138 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-228070 addons-228070
	I1024 19:24:14.871195 1118138 network_create.go:108] docker network addons-228070 192.168.49.0/24 created
	I1024 19:24:14.871226 1118138 kic.go:118] calculated static IP "192.168.49.2" for the "addons-228070" container
	I1024 19:24:14.871316 1118138 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:24:14.888319 1118138 cli_runner.go:164] Run: docker volume create addons-228070 --label name.minikube.sigs.k8s.io=addons-228070 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:24:14.906757 1118138 oci.go:103] Successfully created a docker volume addons-228070
	I1024 19:24:14.906850 1118138 cli_runner.go:164] Run: docker run --rm --name addons-228070-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-228070 --entrypoint /usr/bin/test -v addons-228070:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:24:17.030915 1118138 cli_runner.go:217] Completed: docker run --rm --name addons-228070-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-228070 --entrypoint /usr/bin/test -v addons-228070:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (2.1240206s)
	I1024 19:24:17.030945 1118138 oci.go:107] Successfully prepared a docker volume addons-228070
	I1024 19:24:17.030977 1118138 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:24:17.031005 1118138 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 19:24:17.031077 1118138 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-228070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 19:24:21.327431 1118138 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-228070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.296316167s)
	I1024 19:24:21.327469 1118138 kic.go:200] duration metric: took 4.296462 seconds to extract preloaded images to volume
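	The two `docker run --rm` commands above use one pattern: a disposable container whose entrypoint is overridden (first `/usr/bin/test`, then `/usr/bin/tar`), with the host tarball and the named volume both mounted, so the volume is populated without any long-lived container. A generic sketch of the same technique (the local tarball path here is a placeholder; the log pins the image by digest, shortened below):

	    # Populate a named volume by extracting a tarball inside a throwaway container.
	    docker volume create demo-data
	    docker run --rm \
	      -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	      -v demo-data:/extractDir \
	      --entrypoint /usr/bin/tar \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423 \
	      -I lz4 -xf /preloaded.tar -C /extractDir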
	W1024 19:24:21.327609 1118138 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 19:24:21.327730 1118138 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 19:24:21.396851 1118138 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-228070 --name addons-228070 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-228070 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-228070 --network addons-228070 --ip 192.168.49.2 --volume addons-228070:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:24:21.731384 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Running}}
	I1024 19:24:21.757923 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:21.781989 1118138 cli_runner.go:164] Run: docker exec addons-228070 stat /var/lib/dpkg/alternatives/iptables
	I1024 19:24:21.871431 1118138 oci.go:144] the created container "addons-228070" has a running status.
	I1024 19:24:21.871458 1118138 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa...
	I1024 19:24:22.668817 1118138 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 19:24:22.693239 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:22.713484 1118138 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 19:24:22.713503 1118138 kic_runner.go:114] Args: [docker exec --privileged addons-228070 chown docker:docker /home/docker/.ssh/authorized_keys]
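	Key injection here is two steps: copy the freshly generated public key into the container's authorized_keys, then fix ownership via a privileged exec. minikube performs the copy through its own kic_runner rather than the CLI, but done by hand it would look roughly like this (paths from the log; assumes /home/docker/.ssh already exists, as it does in the kicbase image):

	    docker cp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa.pub \
	      addons-228070:/home/docker/.ssh/authorized_keys
	    docker exec --privileged addons-228070 \
	      chown docker:docker /home/docker/.ssh/authorized_keys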
	I1024 19:24:22.804813 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:22.828070 1118138 machine.go:88] provisioning docker machine ...
	I1024 19:24:22.828100 1118138 ubuntu.go:169] provisioning hostname "addons-228070"
	I1024 19:24:22.828168 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:22.849460 1118138 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:22.849958 1118138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1024 19:24:22.849981 1118138 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-228070 && echo "addons-228070" | sudo tee /etc/hostname
	I1024 19:24:23.018662 1118138 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-228070
	
	I1024 19:24:23.018749 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:23.046331 1118138 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:23.046754 1118138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1024 19:24:23.046780 1118138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-228070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-228070/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-228070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:24:23.187139 1118138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:24:23.187176 1118138 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 19:24:23.187216 1118138 ubuntu.go:177] setting up certificates
	I1024 19:24:23.187225 1118138 provision.go:83] configureAuth start
	I1024 19:24:23.187294 1118138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-228070
	I1024 19:24:23.207616 1118138 provision.go:138] copyHostCerts
	I1024 19:24:23.207696 1118138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 19:24:23.207820 1118138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 19:24:23.207882 1118138 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 19:24:23.207928 1118138 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.addons-228070 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-228070]
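	minikube generates this server certificate in Go, signing it with the local CA created earlier (certs/ca.pem, certs/ca-key.pem) and embedding the SANs listed above. A purely illustrative openssl equivalent for the same SAN set, not what minikube actually runs:

	    # CSR for the machine, org name as in the log.
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -out server.csr -subj "/O=jenkins.addons-228070"
	    # Sign with the CA, adding the SANs from the provision.go line above.
	    printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-228070\n' > san.cnf
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	      -CAcreateserial -out server.pem -days 365 -extfile san.cnf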
	I1024 19:24:23.721200 1118138 provision.go:172] copyRemoteCerts
	I1024 19:24:23.721272 1118138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:24:23.721313 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:23.741770 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:23.840877 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:24:23.870058 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1024 19:24:23.898446 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:24:23.926742 1118138 provision.go:86] duration metric: configureAuth took 739.501162ms
	I1024 19:24:23.926774 1118138 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:24:23.926959 1118138 config.go:182] Loaded profile config "addons-228070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:24:23.927076 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:23.946138 1118138 main.go:141] libmachine: Using SSH client type: native
	I1024 19:24:23.946585 1118138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34210 <nil> <nil>}
	I1024 19:24:23.946607 1118138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:24:24.200040 1118138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:24:24.200062 1118138 machine.go:91] provisioned docker machine in 1.3719725s
	I1024 19:24:24.200073 1118138 client.go:171] LocalClient.Create took 10.201368675s
	I1024 19:24:24.200085 1118138 start.go:167] duration metric: libmachine.API.Create for "addons-228070" took 10.20142095s
	I1024 19:24:24.200092 1118138 start.go:300] post-start starting for "addons-228070" (driver="docker")
	I1024 19:24:24.200102 1118138 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:24:24.200171 1118138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:24:24.200230 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:24.218515 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:24.316721 1118138 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:24:24.320822 1118138 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:24:24.320928 1118138 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:24:24.320948 1118138 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:24:24.320961 1118138 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:24:24.320972 1118138 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 19:24:24.321041 1118138 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 19:24:24.321067 1118138 start.go:303] post-start completed in 120.968651ms
	I1024 19:24:24.321384 1118138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-228070
	I1024 19:24:24.342006 1118138 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/config.json ...
	I1024 19:24:24.342294 1118138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:24:24.342350 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:24.360217 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:24.459651 1118138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:24:24.465145 1118138 start.go:128] duration metric: createHost completed in 10.4692534s
	I1024 19:24:24.465170 1118138 start.go:83] releasing machines lock for "addons-228070", held for 10.469402692s
	I1024 19:24:24.465261 1118138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-228070
	I1024 19:24:24.482884 1118138 ssh_runner.go:195] Run: cat /version.json
	I1024 19:24:24.482933 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:24.482941 1118138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:24:24.482998 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:24.502413 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:24.503796 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:24.732665 1118138 ssh_runner.go:195] Run: systemctl --version
	I1024 19:24:24.738069 1118138 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:24:24.886545 1118138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:24:24.891863 1118138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:24:24.915046 1118138 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:24:24.915123 1118138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:24:24.956798 1118138 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1024 19:24:24.956823 1118138 start.go:472] detecting cgroup driver to use...
	I1024 19:24:24.956855 1118138 detect.go:196] detected "cgroupfs" cgroup driver on host os
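	The "cgroupfs" result above can be cross-checked on the host; both of the following agree with the docker info dump earlier in this log:

	    docker info --format '{{.CgroupDriver}}'   # prints: cgroupfs
	    stat -fc %T /sys/fs/cgroup                 # tmpfs => cgroup v1, cgroup2fs => cgroup v2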
	I1024 19:24:24.956903 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:24:24.974347 1118138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:24:24.987765 1118138 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:24:24.987873 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:24:25.005526 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:24:25.023278 1118138 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:24:25.121237 1118138 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:24:25.231286 1118138 docker.go:214] disabling docker service ...
	I1024 19:24:25.231355 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:24:25.252148 1118138 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:24:25.265782 1118138 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:24:25.375050 1118138 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:24:25.483022 1118138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:24:25.496398 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:24:25.516259 1118138 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:24:25.516348 1118138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:25.528041 1118138 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:24:25.528156 1118138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:25.543078 1118138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:25.555925 1118138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:24:25.568416 1118138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:24:25.579758 1118138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:24:25.589957 1118138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:24:25.599891 1118138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:24:25.696124 1118138 ssh_runner.go:195] Run: sudo systemctl restart crio
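	Taken together, the sed edits above leave the drop-in with values equivalent to the following reconstruction (derived from the sed expressions; the section headers follow the standard crio.conf layout and do not appear in the log itself):

	    # /etc/crio/crio.conf.d/02-crio.conf (reconstructed)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"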
	I1024 19:24:25.822704 1118138 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:24:25.822816 1118138 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:24:25.828130 1118138 start.go:540] Will wait 60s for crictl version
	I1024 19:24:25.828213 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:24:25.832525 1118138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:24:25.873870 1118138 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:24:25.873993 1118138 ssh_runner.go:195] Run: crio --version
	I1024 19:24:25.916762 1118138 ssh_runner.go:195] Run: crio --version
	I1024 19:24:25.966012 1118138 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 19:24:25.968232 1118138 cli_runner.go:164] Run: docker network inspect addons-228070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:24:25.985261 1118138 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1024 19:24:25.989967 1118138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:24:26.003581 1118138 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:24:26.003659 1118138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:24:26.077851 1118138 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:24:26.077873 1118138 crio.go:415] Images already preloaded, skipping extraction
	I1024 19:24:26.077932 1118138 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:24:26.121302 1118138 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:24:26.121321 1118138 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:24:26.121394 1118138 ssh_runner.go:195] Run: crio config
	I1024 19:24:26.179430 1118138 cni.go:84] Creating CNI manager for ""
	I1024 19:24:26.179460 1118138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:24:26.179502 1118138 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:24:26.179521 1118138 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-228070 NodeName:addons-228070 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:24:26.179690 1118138 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-228070"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
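	A config in this shape is handed to kubeadm as a single file; the log below stages it at /var/tmp/minikube/kubeadm.yaml.new and verifies the versioned binaries under /var/lib/minikube/binaries. The eventual invocation is, in sketch form (minikube also passes its own preflight-ignore list, which is not shown here):

	    sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml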
	
	I1024 19:24:26.179795 1118138 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-228070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-228070 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:24:26.179866 1118138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:24:26.190441 1118138 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:24:26.190515 1118138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:24:26.200604 1118138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1024 19:24:26.221125 1118138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:24:26.241833 1118138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
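The kubelet unit override above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, next to the base unit in /lib/systemd/system/kubelet.service. For any drop-in like this, systemd must re-read unit files before the override takes effect; the generic workflow, sketched here rather than taken from the log, is:

    sudo systemctl daemon-reload   # pick up the new unit and drop-in
    sudo systemctl restart kubelet
    systemctl cat kubelet          # prints the merged unit, including 10-kubeadm.conf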
	I1024 19:24:26.262552 1118138 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:24:26.266859 1118138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:24:26.281366 1118138 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070 for IP: 192.168.49.2
	I1024 19:24:26.281402 1118138 certs.go:190] acquiring lock for shared ca certs: {Name:mka7b9c27527bac3ad97e94531dcdc2bc2059d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.281523 1118138 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key
	I1024 19:24:26.719818 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt ...
	I1024 19:24:26.719859 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt: {Name:mk176e869d131afd9ab971311c554f848d81b3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.720114 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key ...
	I1024 19:24:26.720128 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key: {Name:mkc4569e00ec9c92d961853afdbc997153c81aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.720241 1118138 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key
	I1024 19:24:26.910532 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt ...
	I1024 19:24:26.910563 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt: {Name:mk2dd6a0990851e5951a630cf5c87b30ece8682c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.911299 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key ...
	I1024 19:24:26.911314 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key: {Name:mk11f100c5c66245f4cde45e3c4db06a91481f60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:26.911452 1118138 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.key
	I1024 19:24:26.911469 1118138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt with IP's: []
	I1024 19:24:27.344410 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt ...
	I1024 19:24:27.344444 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: {Name:mk399e7421911ced5fee71a70a59e55b4f23142d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:27.344673 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.key ...
	I1024 19:24:27.344687 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.key: {Name:mkc3dffac2f70d7596c74d18e8c0cf4da87d8abc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:27.344781 1118138 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key.dd3b5fb2
	I1024 19:24:27.344801 1118138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:24:27.876735 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt.dd3b5fb2 ...
	I1024 19:24:27.876767 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt.dd3b5fb2: {Name:mk27de5254dc11d4cd709dbdcd82e677694dcf42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:27.876964 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key.dd3b5fb2 ...
	I1024 19:24:27.876978 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key.dd3b5fb2: {Name:mk04e2fa075ab478d73c4d86c4ae72e310d34944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:27.877078 1118138 certs.go:337] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt
	I1024 19:24:27.877155 1118138 certs.go:341] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key
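The apiserver certificate minted above covers the IPs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] (node IP, service VIPs, loopback). The SANs actually baked into the resulting file can be confirmed with openssl, using the path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt \
      | grep -A1 'Subject Alternative Name'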
	I1024 19:24:27.877207 1118138 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.key
	I1024 19:24:27.877230 1118138 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.crt with IP's: []
	I1024 19:24:28.121992 1118138 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.crt ...
	I1024 19:24:28.122028 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.crt: {Name:mk6cdf942fe9e14df7951e6b4e10399fd12acdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:28.122782 1118138 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.key ...
	I1024 19:24:28.122800 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.key: {Name:mk4564b5419d3b8595885ab9ca5c11a2f75bfb3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:28.123413 1118138 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:24:28.123469 1118138 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:24:28.123499 1118138 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:24:28.123527 1118138 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem (1675 bytes)
	I1024 19:24:28.124189 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:24:28.151752 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:24:28.179813 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:24:28.207704 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:24:28.235828 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:24:28.265128 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:24:28.291893 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:24:28.319552 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:24:28.347109 1118138 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:24:28.374334 1118138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:24:28.394552 1118138 ssh_runner.go:195] Run: openssl version
	I1024 19:24:28.401159 1118138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:24:28.412536 1118138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:28.416867 1118138 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:28.416999 1118138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:24:28.425195 1118138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
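The b5213941.0 link name is not arbitrary: it is OpenSSL's subject hash of the minikubeCA certificate, which lets any tool using the default verify paths locate the CA. The two commands above, plus a verification step, amount to:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # expect: OK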
	I1024 19:24:28.436469 1118138 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:24:28.440626 1118138 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:24:28.440692 1118138 kubeadm.go:404] StartCluster: {Name:addons-228070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-228070 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:24:28.440772 1118138 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:24:28.440837 1118138 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:24:28.485378 1118138 cri.go:89] found id: ""
	I1024 19:24:28.485449 1118138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:24:28.495571 1118138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:24:28.505651 1118138 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1024 19:24:28.505782 1118138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:24:28.515720 1118138 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:24:28.515763 1118138 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1024 19:24:28.568155 1118138 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:24:28.568379 1118138 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:24:28.613697 1118138 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:24:28.613812 1118138 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-aws
	I1024 19:24:28.613870 1118138 kubeadm.go:322] OS: Linux
	I1024 19:24:28.613948 1118138 kubeadm.go:322] CGROUPS_CPU: enabled
	I1024 19:24:28.614025 1118138 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1024 19:24:28.614103 1118138 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1024 19:24:28.614180 1118138 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1024 19:24:28.614255 1118138 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1024 19:24:28.614336 1118138 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1024 19:24:28.614399 1118138 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1024 19:24:28.614477 1118138 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1024 19:24:28.614541 1118138 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1024 19:24:28.694062 1118138 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:24:28.694189 1118138 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:24:28.694325 1118138 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1024 19:24:28.946227 1118138 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:24:28.950063 1118138 out.go:204]   - Generating certificates and keys ...
	I1024 19:24:28.950290 1118138 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:24:28.950452 1118138 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:24:29.274067 1118138 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:24:29.695234 1118138 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:24:29.854908 1118138 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:24:30.296335 1118138 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:24:30.842365 1118138 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:24:30.842766 1118138 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-228070 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:24:31.176661 1118138 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:24:31.177080 1118138 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-228070 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:24:31.851112 1118138 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:24:32.489345 1118138 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:24:32.620971 1118138 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:24:32.621298 1118138 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:24:33.029425 1118138 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:24:33.699791 1118138 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:24:33.963532 1118138 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:24:34.513839 1118138 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:24:34.514448 1118138 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:24:34.518977 1118138 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:24:34.524388 1118138 out.go:204]   - Booting up control plane ...
	I1024 19:24:34.524540 1118138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:24:34.524617 1118138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:24:34.524683 1118138 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:24:34.534498 1118138 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:24:34.535552 1118138 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:24:34.535827 1118138 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:24:34.636844 1118138 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:24:41.139122 1118138 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502383 seconds
	I1024 19:24:41.139248 1118138 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:24:41.156124 1118138 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:24:41.680905 1118138 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:24:41.681093 1118138 kubeadm.go:322] [mark-control-plane] Marking the node addons-228070 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:24:42.193548 1118138 kubeadm.go:322] [bootstrap-token] Using token: zhjfdy.ymp8jw4z2hzsevhw
	I1024 19:24:42.195524 1118138 out.go:204]   - Configuring RBAC rules ...
	I1024 19:24:42.195645 1118138 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:24:42.203319 1118138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:24:42.212271 1118138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:24:42.216415 1118138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:24:42.220711 1118138 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:24:42.225184 1118138 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:24:42.240674 1118138 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:24:42.486001 1118138 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:24:42.620694 1118138 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:24:42.621866 1118138 kubeadm.go:322] 
	I1024 19:24:42.621942 1118138 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:24:42.621949 1118138 kubeadm.go:322] 
	I1024 19:24:42.622022 1118138 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:24:42.622032 1118138 kubeadm.go:322] 
	I1024 19:24:42.622057 1118138 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:24:42.622112 1118138 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:24:42.622164 1118138 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:24:42.622175 1118138 kubeadm.go:322] 
	I1024 19:24:42.622230 1118138 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:24:42.622237 1118138 kubeadm.go:322] 
	I1024 19:24:42.622282 1118138 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:24:42.622291 1118138 kubeadm.go:322] 
	I1024 19:24:42.622340 1118138 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:24:42.622413 1118138 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:24:42.622481 1118138 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:24:42.622490 1118138 kubeadm.go:322] 
	I1024 19:24:42.622569 1118138 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:24:42.622662 1118138 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:24:42.622671 1118138 kubeadm.go:322] 
	I1024 19:24:42.622749 1118138 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zhjfdy.ymp8jw4z2hzsevhw \
	I1024 19:24:42.622851 1118138 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 \
	I1024 19:24:42.622876 1118138 kubeadm.go:322] 	--control-plane 
	I1024 19:24:42.622884 1118138 kubeadm.go:322] 
	I1024 19:24:42.622963 1118138 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:24:42.622972 1118138 kubeadm.go:322] 
	I1024 19:24:42.623053 1118138 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zhjfdy.ymp8jw4z2hzsevhw \
	I1024 19:24:42.623152 1118138 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 
	I1024 19:24:42.627275 1118138 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1024 19:24:42.627434 1118138 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
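The --discovery-token-ca-cert-hash printed in both join commands above is the SHA-256 of the CA's DER-encoded public key. It can be recomputed with the recipe from the kubeadm documentation, substituting this cluster's cert path; the output should match the fc1ef000... value above:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'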
	I1024 19:24:42.627466 1118138 cni.go:84] Creating CNI manager for ""
	I1024 19:24:42.627479 1118138 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:24:42.629976 1118138 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:24:42.632063 1118138 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:24:42.638196 1118138 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:24:42.638217 1118138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:24:42.676499 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
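The CNI manifest applied above is the kindnet deployment recommended at 19:24:42.627479. As a sketch (the DaemonSet name "kindnet" is assumed from the kindnet manifest, which this log does not show), its rollout could be watched with:

    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet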
	I1024 19:24:43.538699 1118138 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:24:43.538845 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:43.538920 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=addons-228070 minikube.k8s.io/updated_at=2023_10_24T19_24_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:43.559281 1118138 ops.go:34] apiserver oom_adj: -16
	I1024 19:24:43.684322 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:43.808875 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:44.420675 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:44.920535 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:45.421205 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:45.921505 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:46.421034 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:46.921121 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:47.420585 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:47.920597 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:48.420585 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:48.921507 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:49.421162 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:49.921103 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:50.421526 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:50.921344 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:51.421021 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:51.921249 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:52.420507 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:52.921134 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:53.420972 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:53.921408 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:54.420578 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:54.921450 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:55.420573 1118138 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:24:55.514709 1118138 kubeadm.go:1081] duration metric: took 11.975908408s to wait for elevateKubeSystemPrivileges.
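The run of identical `kubectl get sa default` calls between 19:24:43 and 19:24:55 is a poll at roughly 500 ms intervals: minikube retries until the controller-manager has created the default ServiceAccount, the condition elevateKubeSystemPrivileges waits on (11.97 s here). A shell equivalent of that loop, using the same paths:

    until sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done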
	I1024 19:24:55.514743 1118138 kubeadm.go:406] StartCluster complete in 27.074072739s
	I1024 19:24:55.514760 1118138 settings.go:142] acquiring lock: {Name:mkaa82b52e1ee562b451304e36332812fcccf981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:55.514888 1118138 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:24:55.515268 1118138 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/kubeconfig: {Name:mkcb958baf0d06a87d3e11266d914b0c86b46ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:24:55.515453 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:24:55.515728 1118138 config.go:182] Loaded profile config "addons-228070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:24:55.515841 1118138 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1024 19:24:55.515930 1118138 addons.go:69] Setting volumesnapshots=true in profile "addons-228070"
	I1024 19:24:55.515945 1118138 addons.go:231] Setting addon volumesnapshots=true in "addons-228070"
	I1024 19:24:55.515978 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.516428 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.516879 1118138 addons.go:69] Setting cloud-spanner=true in profile "addons-228070"
	I1024 19:24:55.516895 1118138 addons.go:231] Setting addon cloud-spanner=true in "addons-228070"
	I1024 19:24:55.516950 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.517317 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.517839 1118138 addons.go:69] Setting metrics-server=true in profile "addons-228070"
	I1024 19:24:55.517871 1118138 addons.go:231] Setting addon metrics-server=true in "addons-228070"
	I1024 19:24:55.517930 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.518375 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.518807 1118138 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-228070"
	I1024 19:24:55.518848 1118138 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-228070"
	I1024 19:24:55.518878 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.519239 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.527631 1118138 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-228070"
	I1024 19:24:55.529901 1118138 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-228070"
	I1024 19:24:55.529966 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.530390 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.532274 1118138 addons.go:69] Setting registry=true in profile "addons-228070"
	I1024 19:24:55.555531 1118138 addons.go:231] Setting addon registry=true in "addons-228070"
	I1024 19:24:55.555637 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.556101 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.528601 1118138 addons.go:69] Setting default-storageclass=true in profile "addons-228070"
	I1024 19:24:55.559683 1118138 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-228070"
	I1024 19:24:55.560033 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.528614 1118138 addons.go:69] Setting gcp-auth=true in profile "addons-228070"
	I1024 19:24:55.587643 1118138 mustload.go:65] Loading cluster: addons-228070
	I1024 19:24:55.587930 1118138 config.go:182] Loaded profile config "addons-228070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:24:55.588293 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.532412 1118138 addons.go:69] Setting storage-provisioner=true in profile "addons-228070"
	I1024 19:24:55.597285 1118138 addons.go:231] Setting addon storage-provisioner=true in "addons-228070"
	I1024 19:24:55.597367 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.528620 1118138 addons.go:69] Setting ingress=true in profile "addons-228070"
	I1024 19:24:55.602093 1118138 addons.go:231] Setting addon ingress=true in "addons-228070"
	I1024 19:24:55.602178 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.602656 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.528628 1118138 addons.go:69] Setting inspektor-gadget=true in profile "addons-228070"
	I1024 19:24:55.633107 1118138 addons.go:231] Setting addon inspektor-gadget=true in "addons-228070"
	I1024 19:24:55.633190 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.633664 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.528624 1118138 addons.go:69] Setting ingress-dns=true in profile "addons-228070"
	I1024 19:24:55.635263 1118138 addons.go:231] Setting addon ingress-dns=true in "addons-228070"
	I1024 19:24:55.635345 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.635801 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.532420 1118138 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-228070"
	I1024 19:24:55.655539 1118138 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-228070"
	I1024 19:24:55.655895 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.678920 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.717331 1118138 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1024 19:24:55.742214 1118138 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:24:55.742235 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:24:55.742297 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.750354 1118138 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1024 19:24:55.742166 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:24:55.758510 1118138 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-228070" context rescaled to 1 replica
	I1024 19:24:55.760789 1118138 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1024 19:24:55.767484 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1024 19:24:55.767492 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1024 19:24:55.768762 1118138 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1024 19:24:55.768765 1118138 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1024 19:24:55.768779 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1024 19:24:55.768827 1118138 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:24:55.772174 1118138 out.go:177] * Verifying Kubernetes components...
	I1024 19:24:55.770287 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.773632 1118138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:24:55.774217 1118138 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:24:55.774226 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1024 19:24:55.788625 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1024 19:24:55.793353 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1024 19:24:55.795552 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1024 19:24:55.800312 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1024 19:24:55.800561 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1024 19:24:55.809899 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1024 19:24:55.809966 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.801443 1118138 addons.go:231] Setting addon default-storageclass=true in "addons-228070"
	I1024 19:24:55.801595 1118138 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:24:55.801605 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1024 19:24:55.808293 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.819949 1118138 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1024 19:24:55.812652 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1024 19:24:55.812658 1118138 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.3
	I1024 19:24:55.812684 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.812693 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1024 19:24:55.812752 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.846835 1118138 out.go:177]   - Using image docker.io/registry:2.8.3
	I1024 19:24:55.848943 1118138 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1024 19:24:55.845726 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.845788 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.872333 1118138 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1024 19:24:55.872352 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1024 19:24:55.872414 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.890840 1118138 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:24:55.851076 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1024 19:24:55.890142 1118138 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-228070"
	I1024 19:24:55.893193 1118138 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:24:55.893200 1118138 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1024 19:24:55.893208 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1024 19:24:55.894651 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.894854 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:24:55.895298 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:24:55.904638 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:24:55.904724 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.906410 1118138 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:24:55.908602 1118138 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:24:55.910539 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1024 19:24:55.910562 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1024 19:24:55.910629 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.926387 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:55.930149 1118138 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:24:55.930168 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1024 19:24:55.930228 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:55.992248 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.081004 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.087333 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.149997 1118138 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:24:56.150017 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:24:56.150083 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:56.150405 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.151384 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.178080 1118138 out.go:177]   - Using image docker.io/busybox:stable
	I1024 19:24:56.180004 1118138 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1024 19:24:56.182144 1118138 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:24:56.182165 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1024 19:24:56.182232 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:24:56.194622 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.201976 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.215766 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.216334 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.245011 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.256743 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:24:56.377119 1118138 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:24:56.377187 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1024 19:24:56.533711 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1024 19:24:56.593659 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1024 19:24:56.610472 1118138 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:24:56.610496 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:24:56.615550 1118138 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1024 19:24:56.615572 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1024 19:24:56.620214 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:24:56.630230 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1024 19:24:56.630253 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1024 19:24:56.660552 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1024 19:24:56.708016 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1024 19:24:56.708040 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1024 19:24:56.728169 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1024 19:24:56.730466 1118138 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1024 19:24:56.730486 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1024 19:24:56.788208 1118138 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:24:56.788232 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:24:56.799812 1118138 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1024 19:24:56.799835 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1024 19:24:56.803178 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:24:56.820634 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1024 19:24:56.820658 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1024 19:24:56.835214 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1024 19:24:56.835238 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1024 19:24:56.840861 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:24:56.917240 1118138 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:24:56.917265 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1024 19:24:56.980017 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1024 19:24:56.980039 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1024 19:24:56.983461 1118138 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1024 19:24:56.983482 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1024 19:24:56.992223 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:24:57.007277 1118138 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.246351794s)
	I1024 19:24:57.007307 1118138 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
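The sed pipeline that completed at 19:24:57.007277 splices a log directive and a hosts block (mapping host.minikube.internal to the gateway IP 192.168.49.1) into the CoreDNS Corefile ahead of the forward plugin. The patched ConfigMap can be inspected with the same kubeconfig the log uses:

    sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # should contain, per the sed expressions:
    #   log
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }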
	I1024 19:24:57.007352 1118138 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.205892241s)
	I1024 19:24:57.008162 1118138 node_ready.go:35] waiting up to 6m0s for node "addons-228070" to be "Ready" ...
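node_ready.go begins a 6-minute wait for the node to report Ready. With plain kubectl and the usual minikube kubeconfig, the equivalent one-liner would be:

    kubectl wait --for=condition=Ready node/addons-228070 --timeout=6m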
	I1024 19:24:57.012202 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1024 19:24:57.012225 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1024 19:24:57.103718 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1024 19:24:57.156353 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1024 19:24:57.156378 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1024 19:24:57.165535 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1024 19:24:57.165557 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1024 19:24:57.234396 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1024 19:24:57.234422 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1024 19:24:57.374340 1118138 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:24:57.374365 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1024 19:24:57.393545 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1024 19:24:57.393570 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1024 19:24:57.415728 1118138 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1024 19:24:57.415753 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1024 19:24:57.505624 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:24:57.510751 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1024 19:24:57.510776 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1024 19:24:57.520254 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1024 19:24:57.520277 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1024 19:24:57.588864 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1024 19:24:57.588889 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1024 19:24:57.624493 1118138 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:24:57.624519 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1024 19:24:57.650519 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1024 19:24:57.650546 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1024 19:24:57.780782 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1024 19:24:57.864333 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1024 19:24:57.864357 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1024 19:24:57.976757 1118138 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:24:57.976782 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1024 19:24:58.230749 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1024 19:24:59.164004 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:00.071163 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.477434466s)
	I1024 19:25:00.071286 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.53755175s)
	I1024 19:25:01.478199 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:01.622742 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.00249119s)
	I1024 19:25:01.622777 1118138 addons.go:467] Verifying addon ingress=true in "addons-228070"
	I1024 19:25:01.622965 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.962387204s)
	I1024 19:25:01.623003 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.894809338s)
	I1024 19:25:01.623026 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.819827234s)
	I1024 19:25:01.623063 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.782179714s)
	I1024 19:25:01.623118 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.630869131s)
	I1024 19:25:01.623154 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.5194104s)
	I1024 19:25:01.623245 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.117595957s)
	I1024 19:25:01.623312 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.842494726s)
	I1024 19:25:01.624955 1118138 out.go:177] * Verifying ingress addon...
	I1024 19:25:01.625526 1118138 addons.go:467] Verifying addon metrics-server=true in "addons-228070"
	I1024 19:25:01.625544 1118138 addons.go:467] Verifying addon registry=true in "addons-228070"
	W1024 19:25:01.625585 1118138 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1024 19:25:01.628811 1118138 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1024 19:25:01.630533 1118138 out.go:177] * Verifying registry addon...
	I1024 19:25:01.633531 1118138 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1024 19:25:01.630699 1118138 retry.go:31] will retry after 141.318584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
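	[annotation] The two identical failure dumps above are a single apply: the batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass custom resource in one kubectl invocation, and the custom resource is rejected because the freshly-created CRD is not yet established in API discovery ("no matches for kind ... ensure CRDs are installed first"). retry.go then schedules a re-apply after 141ms, and the forced re-apply below succeeds once the CRD is served. A minimal sketch of that retry-on-race pattern; illustrative only, not minikube's actual retry package, and applyWithRetry is a hypothetical helper:

	package addons

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl and retries with exponential
	// backoff, which is enough to ride out the window between a CRD being
	// created and the API server serving the kinds it defines.
	// (hypothetical helper, not minikube's implementation)
	func applyWithRetry(kubectl string, attempts int, backoff time.Duration, manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(kubectl, args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply attempt %d: %v: %s", i+1, err, out)
			time.Sleep(backoff)
			backoff *= 2 // back off a little longer each round
		}
		return lastErr
	}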
	I1024 19:25:01.650278 1118138 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1024 19:25:01.650352 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:01.660265 1118138 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1024 19:25:01.660285 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
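	[annotation] Each repeating kapi.go:96 line below is one iteration of the same check: list pods by label selector and poll until every match leaves the Pending phase. A sketch of that loop with client-go; waitForPodsRunning is a hypothetical helper, not the actual kapi package:

	package addons

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls until every pod matching selector in ns is
	// Running, mirroring the "current state: Pending" lines in the log.
	// (hypothetical helper, for illustration)
	func waitForPodsRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or no pods yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // at least one pod still Pending
				}
			}
			return true, nil
		})
	}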
	I1024 19:25:01.662407 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1024 19:25:01.666883 1118138 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
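	[annotation] The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: the update to mark the local-path StorageClass as default carried a stale resourceVersion because another writer modified the object in between, so the API server rejected it with "the object has been modified". It reflects a lost race during concurrent addon setup rather than a broken storage class.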
	I1024 19:25:01.671515 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:01.775942 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1024 19:25:01.991669 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.760867475s)
	I1024 19:25:01.991752 1118138 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-228070"
	I1024 19:25:02.000854 1118138 out.go:177] * Verifying csi-hostpath-driver addon...
	I1024 19:25:02.003831 1118138 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1024 19:25:02.024658 1118138 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1024 19:25:02.024725 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:02.043302 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:02.167189 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:02.176485 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:02.550956 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:02.643006 1118138 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1024 19:25:02.643115 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:25:02.683209 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:25:02.688567 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:02.702592 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:02.908581 1118138 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1024 19:25:02.934794 1118138 addons.go:231] Setting addon gcp-auth=true in "addons-228070"
	I1024 19:25:02.934867 1118138 host.go:66] Checking if "addons-228070" exists ...
	I1024 19:25:02.935424 1118138 cli_runner.go:164] Run: docker container inspect addons-228070 --format={{.State.Status}}
	I1024 19:25:02.974249 1118138 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1024 19:25:02.974307 1118138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-228070
	I1024 19:25:03.009972 1118138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34210 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/addons-228070/id_rsa Username:docker}
	I1024 19:25:03.060334 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:03.173140 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:03.196074 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:03.308478 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.532415403s)
	I1024 19:25:03.310825 1118138 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1024 19:25:03.312888 1118138 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1024 19:25:03.315012 1118138 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1024 19:25:03.315035 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1024 19:25:03.375614 1118138 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1024 19:25:03.375676 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1024 19:25:03.457143 1118138 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1024 19:25:03.457211 1118138 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1024 19:25:03.506502 1118138 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
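	[annotation] The gcp-auth steps above copy the test's placeholder credentials onto the node (/var/lib/minikube/google_application_credentials.json, 162 bytes of fake data) and apply the addon's namespace, service, and webhook manifests; the addon works by running a mutating admission webhook that injects the credentials mount and Google project environment variables into pods created after it is up.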
	I1024 19:25:03.563037 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:03.695660 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:03.703154 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:03.957133 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:04.048989 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:04.167399 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:04.176688 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:04.561282 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:04.616861 1118138 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.110273178s)
	I1024 19:25:04.618297 1118138 addons.go:467] Verifying addon gcp-auth=true in "addons-228070"
	I1024 19:25:04.620171 1118138 out.go:177] * Verifying gcp-auth addon...
	I1024 19:25:04.623255 1118138 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1024 19:25:04.629090 1118138 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1024 19:25:04.629110 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:04.633620 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:04.666961 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:04.687487 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:05.047483 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:05.137843 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:05.167054 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:05.175999 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:05.548408 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:05.638092 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:05.668131 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:05.678331 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:06.048829 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:06.137935 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:06.167611 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:06.176038 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:06.456859 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:06.551342 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:06.640930 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:06.667669 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:06.678564 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:07.049559 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:07.137966 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:07.167029 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:07.177952 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:07.557431 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:07.639498 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:07.668043 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:07.676623 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:08.049413 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:08.138381 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:08.176369 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:08.183560 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:08.457390 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:08.547734 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:08.637430 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:08.666857 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:08.675957 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:09.049270 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:09.137807 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:09.167323 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:09.175484 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:09.548606 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:09.637876 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:09.666888 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:09.675702 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:10.048859 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:10.137561 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:10.167618 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:10.175777 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:10.549186 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:10.637654 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:10.666832 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:10.675668 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:10.957629 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:11.048791 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:11.138150 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:11.166669 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:11.175818 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:11.547819 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:11.644608 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:11.673393 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:11.683627 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:12.048336 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:12.137203 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:12.167502 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:12.175742 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:12.548152 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:12.637914 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:12.666472 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:12.675550 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:13.048242 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:13.138046 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:13.166742 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:13.175688 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:13.456416 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:13.548046 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:13.637519 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:13.668226 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:13.676050 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:14.048664 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:14.137433 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:14.167121 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:14.175959 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:14.548197 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:14.637655 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:14.666757 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:14.675798 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:15.048537 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:15.137664 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:15.167087 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:15.176238 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:15.456812 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:15.547770 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:15.637222 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:15.667137 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:15.676119 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:16.048662 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:16.137507 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:16.166742 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:16.175739 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:16.548796 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:16.638212 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:16.666900 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:16.676037 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:17.047957 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:17.137397 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:17.167384 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:17.176110 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:17.457316 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:17.548431 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:17.637849 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:17.666812 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:17.675733 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:18.048406 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:18.137978 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:18.167197 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:18.176099 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:18.548216 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:18.637794 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:18.667302 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:18.676070 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:19.048195 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:19.137114 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:19.167078 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:19.176126 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:19.547421 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:19.637355 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:19.667096 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:19.676057 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:19.956353 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:20.048248 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:20.137057 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:20.167515 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:20.175834 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:20.547996 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:20.637631 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:20.666673 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:20.675208 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:21.048317 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:21.138086 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:21.167102 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:21.175970 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:21.551721 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:21.638219 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:21.668421 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:21.676505 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:21.956862 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:22.048176 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:22.138293 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:22.167065 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:22.175907 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:22.548247 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:22.637134 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:22.666761 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:22.678698 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:23.048314 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:23.137703 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:23.167379 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:23.176312 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:23.548227 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:23.637029 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:23.667023 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:23.675818 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:23.957234 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:24.048711 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:24.137654 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:24.167287 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:24.176222 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:24.547890 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:24.637938 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:24.666441 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:24.676541 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:25.048950 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:25.137691 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:25.167039 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:25.176077 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:25.547659 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:25.638189 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:25.667322 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:25.676417 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:26.049053 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:26.137885 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:26.166751 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:26.175632 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:26.457148 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:26.548791 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:26.638179 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:26.667010 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:26.675887 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:27.048468 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:27.137348 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:27.167014 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:27.175973 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:27.548313 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:27.637367 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:27.667109 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:27.676910 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:28.048207 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:28.137649 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:28.167498 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:28.175338 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:28.457365 1118138 node_ready.go:58] node "addons-228070" has status "Ready":"False"
	I1024 19:25:28.547596 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:28.637231 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:28.666764 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:28.675810 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:29.034141 1118138 node_ready.go:49] node "addons-228070" has status "Ready":"True"
	I1024 19:25:29.034167 1118138 node_ready.go:38] duration metric: took 32.025981539s waiting for node "addons-228070" to be "Ready" ...
	I1024 19:25:29.034178 1118138 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:25:29.070286 1118138 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fhbrz" in "kube-system" namespace to be "Ready" ...
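	[annotation] node_ready and pod_ready differ from the kapi waits above: instead of the pod phase, they read the Ready condition from the object's status, which is the "Ready":"False"/"True" string printed in the log. A condensed sketch of both checks with client-go; nodeReady and podReady are hypothetical helpers for illustration:

	package addons

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the node's NodeReady condition is True.
	func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	// podReady reports whether the pod's Ready condition is True.
	func podReady(cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}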
	I1024 19:25:29.077210 1118138 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1024 19:25:29.077240 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:29.139897 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:29.178330 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:29.257994 1118138 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1024 19:25:29.258057 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:29.550173 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:29.675053 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:29.692386 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:29.693026 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:30.051913 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:30.141201 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:30.168607 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:30.177431 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:30.553394 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:30.641823 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:30.668842 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:30.681287 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:31.051298 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:31.144238 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:31.144751 1118138 pod_ready.go:102] pod "coredns-5dd5756b68-fhbrz" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:31.167179 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:31.177693 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:31.552443 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:31.653610 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:31.665594 1118138 pod_ready.go:92] pod "coredns-5dd5756b68-fhbrz" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.665664 1118138 pod_ready.go:81] duration metric: took 2.595301442s waiting for pod "coredns-5dd5756b68-fhbrz" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.665701 1118138 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.686763 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:31.687850 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:31.691046 1118138 pod_ready.go:92] pod "etcd-addons-228070" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.691112 1118138 pod_ready.go:81] duration metric: took 25.377502ms waiting for pod "etcd-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.691141 1118138 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.710436 1118138 pod_ready.go:92] pod "kube-apiserver-addons-228070" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.710550 1118138 pod_ready.go:81] duration metric: took 19.389578ms waiting for pod "kube-apiserver-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.710591 1118138 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.724653 1118138 pod_ready.go:92] pod "kube-controller-manager-addons-228070" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.724720 1118138 pod_ready.go:81] duration metric: took 14.072134ms waiting for pod "kube-controller-manager-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.724762 1118138 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qtmf6" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.760154 1118138 pod_ready.go:92] pod "kube-proxy-qtmf6" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:31.760223 1118138 pod_ready.go:81] duration metric: took 35.436292ms waiting for pod "kube-proxy-qtmf6" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:31.760249 1118138 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:32.049047 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:32.137722 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:32.158108 1118138 pod_ready.go:92] pod "kube-scheduler-addons-228070" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:32.158179 1118138 pod_ready.go:81] duration metric: took 397.910787ms waiting for pod "kube-scheduler-addons-228070" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:32.158206 1118138 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:32.167553 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:32.178380 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:32.558065 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:32.638024 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:32.667016 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:32.678177 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:33.052325 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:33.138035 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:33.167686 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:33.177289 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:33.550154 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:33.637974 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:33.668029 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:33.677154 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:34.050500 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:34.140019 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:34.168110 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:34.181352 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:34.465446 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:34.557029 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:34.639300 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:34.668777 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:34.683599 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:35.050712 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:35.138512 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:35.167761 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:35.176708 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:35.550798 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:35.639033 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:35.669657 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:35.678366 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:36.049964 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:36.137268 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:36.177056 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:36.179940 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:36.551160 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:36.639255 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:36.680308 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:36.681236 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:36.965514 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:37.051569 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:37.138061 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:37.167444 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:37.176119 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:37.551063 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:37.644213 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:37.669262 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:37.677693 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:38.050170 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:38.138875 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:38.168318 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:38.180176 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:38.552100 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:38.656573 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:38.667359 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:38.677447 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:39.050594 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:39.139541 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:39.167474 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:39.178265 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:39.465326 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:39.550816 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:39.638190 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:39.670307 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:39.682867 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:40.050650 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:40.138713 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:40.170891 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:40.179657 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:40.571117 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:40.638518 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:40.669795 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:40.680016 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:41.050792 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:41.137492 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:41.167497 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:41.176312 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:41.466037 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:41.550912 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:41.644696 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:41.671257 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:41.680933 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:42.051645 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:42.141342 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:42.168900 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:42.178871 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:42.552892 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:42.640007 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:42.668826 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:42.678797 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:43.049671 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:43.138027 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:43.167060 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:43.176342 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:43.549542 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:43.646671 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:43.672335 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:43.676907 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:43.965845 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:44.055937 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:44.141611 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:44.167507 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:44.176720 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:44.555481 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:44.639439 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:44.667736 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:44.679273 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:45.051963 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:45.138363 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:45.167583 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:45.177022 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:45.549472 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:45.639951 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:45.667136 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:45.676438 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:45.966843 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:46.049446 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:46.138589 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:46.167423 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:46.177640 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:46.552334 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:46.641522 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:46.668152 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:46.677325 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:47.050545 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:47.137210 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:47.169085 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:47.176822 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:47.552652 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:47.638357 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:47.667978 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:47.678636 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:48.051116 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:48.138601 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:48.167897 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:48.184725 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:48.494839 1118138 pod_ready.go:102] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:48.549927 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:48.641395 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:48.666890 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:48.676382 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:49.049446 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:49.138091 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:49.167535 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:49.176279 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:49.465341 1118138 pod_ready.go:92] pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:49.465367 1118138 pod_ready.go:81] duration metric: took 17.307141735s waiting for pod "metrics-server-7c66d45ddc-fgmf7" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:49.465386 1118138 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:49.551150 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:49.637987 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:49.667749 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:49.678102 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:50.050654 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:50.217730 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:50.218190 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:50.218933 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:50.554528 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:50.641361 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:50.682392 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:50.687631 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:51.049199 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:51.137704 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:51.168463 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:51.177677 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:51.492043 1118138 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:51.550906 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:51.638183 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:51.674434 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:51.691064 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:52.050435 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:52.137400 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:52.168438 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:52.177327 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:52.558848 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:52.638579 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:52.668773 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:52.702148 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:53.052836 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:53.138995 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:53.171292 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:53.178108 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:53.503293 1118138 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:53.549138 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:53.638175 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:53.691317 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:53.692499 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:54.049357 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:54.140636 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:54.167499 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:54.177841 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:54.550143 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:54.637647 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:54.668031 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:54.687666 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:55.049335 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:55.138269 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:55.167596 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:55.176123 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:55.549971 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:55.637888 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:55.668877 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:55.677313 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:55.993500 1118138 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:56.050785 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:56.137574 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:56.167804 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:56.177973 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:56.554138 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:56.638676 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:56.668091 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:56.678702 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:57.057672 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:57.137823 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:57.168562 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:57.177433 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:57.549378 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:57.637987 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:57.667910 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:57.677985 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:58.052189 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:58.137981 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:58.167941 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:58.186954 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:58.492532 1118138 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"False"
	I1024 19:25:58.550297 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:58.640004 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:58.668206 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:58.677013 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:58.998562 1118138 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace has status "Ready":"True"
	I1024 19:25:58.998588 1118138 pod_ready.go:81] duration metric: took 9.533171772s waiting for pod "nvidia-device-plugin-daemonset-vnscp" in "kube-system" namespace to be "Ready" ...
	I1024 19:25:58.998610 1118138 pod_ready.go:38] duration metric: took 29.964420406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:25:58.998624 1118138 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:25:58.998648 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:25:58.998717 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:25:59.052962 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:59.074966 1118138 cri.go:89] found id: "af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:25:59.075025 1118138 cri.go:89] found id: ""
	I1024 19:25:59.075053 1118138 logs.go:284] 1 containers: [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37]
	I1024 19:25:59.075141 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.080306 1118138 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:25:59.080419 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:25:59.138086 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:59.147258 1118138 cri.go:89] found id: "ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:25:59.147327 1118138 cri.go:89] found id: ""
	I1024 19:25:59.147349 1118138 logs.go:284] 1 containers: [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df]
	I1024 19:25:59.147440 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.152769 1118138 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:25:59.152888 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:25:59.170363 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:59.181241 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:59.216958 1118138 cri.go:89] found id: "ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:25:59.217025 1118138 cri.go:89] found id: ""
	I1024 19:25:59.217046 1118138 logs.go:284] 1 containers: [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb]
	I1024 19:25:59.217134 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.221525 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:25:59.221652 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:25:59.279165 1118138 cri.go:89] found id: "30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:25:59.279236 1118138 cri.go:89] found id: ""
	I1024 19:25:59.279258 1118138 logs.go:284] 1 containers: [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b]
	I1024 19:25:59.279340 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.283834 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:25:59.283965 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:25:59.330306 1118138 cri.go:89] found id: "a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:25:59.330381 1118138 cri.go:89] found id: ""
	I1024 19:25:59.330403 1118138 logs.go:284] 1 containers: [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5]
	I1024 19:25:59.330489 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.335139 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:25:59.335223 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:25:59.382901 1118138 cri.go:89] found id: "837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:25:59.382924 1118138 cri.go:89] found id: ""
	I1024 19:25:59.382932 1118138 logs.go:284] 1 containers: [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b]
	I1024 19:25:59.382984 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.387577 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:25:59.387722 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:25:59.430049 1118138 cri.go:89] found id: "05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:25:59.430071 1118138 cri.go:89] found id: ""
	I1024 19:25:59.430079 1118138 logs.go:284] 1 containers: [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703]
	I1024 19:25:59.430135 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:25:59.434826 1118138 logs.go:123] Gathering logs for kubelet ...
	I1024 19:25:59.434852 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 19:25:59.491345 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.504813    1357 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.491576 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.504846    1357 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.491757 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.524197    1357 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.491955 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.503120 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.503320 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.503501 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:25:59.503702 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:25:59.529540 1118138 logs.go:123] Gathering logs for dmesg ...
	I1024 19:25:59.529577 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:25:59.550862 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:25:59.559693 1118138 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:25:59.559724 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:25:59.638211 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:25:59.667558 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:25:59.677238 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:25:59.765370 1118138 logs.go:123] Gathering logs for kube-apiserver [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37] ...
	I1024 19:25:59.765403 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:25:59.979611 1118138 logs.go:123] Gathering logs for etcd [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df] ...
	I1024 19:25:59.979646 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:00.066268 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:00.107636 1118138 logs.go:123] Gathering logs for kube-scheduler [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b] ...
	I1024 19:26:00.107717 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:00.168106 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:00.190902 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:00.202272 1118138 logs.go:123] Gathering logs for kindnet [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703] ...
	I1024 19:26:00.202304 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:00.206741 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:00.276553 1118138 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:26:00.276586 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:26:00.381895 1118138 logs.go:123] Gathering logs for container status ...
	I1024 19:26:00.381930 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:26:00.468984 1118138 logs.go:123] Gathering logs for coredns [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb] ...
	I1024 19:26:00.469016 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:00.549873 1118138 logs.go:123] Gathering logs for kube-proxy [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5] ...
	I1024 19:26:00.549904 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:00.553722 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:00.617058 1118138 logs.go:123] Gathering logs for kube-controller-manager [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b] ...
	I1024 19:26:00.617086 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:00.638071 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:00.668123 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:00.687662 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:00.763756 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:00.763834 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 19:26:00.763913 1118138 out.go:239] X Problems detected in kubelet:
	W1024 19:26:00.764079 1118138 out.go:239]   Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:00.764131 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:00.764162 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:00.764200 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:00.764251 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:00.764283 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:00.764303 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:26:01.050195 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:01.138458 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:01.174639 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:01.180073 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:01.549518 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:01.638730 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:01.667586 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:01.678196 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:02.051607 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:02.138205 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:02.168789 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:02.177346 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:02.557017 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:02.637668 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:02.667691 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:02.677112 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:03.049216 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:03.137923 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:03.167290 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:03.176769 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:03.549623 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:03.637651 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:03.667650 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:03.676267 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:04.049251 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:04.137633 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:04.167000 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:04.176172 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:04.548816 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:04.637406 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:04.672296 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:04.676604 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:05.050090 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:05.138285 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:05.168493 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:05.177297 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:05.612910 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:05.677140 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:05.696655 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:05.715121 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:06.063301 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:06.137974 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:06.172318 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:06.179253 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:06.549930 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:06.647456 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:06.687238 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:06.697510 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:07.056030 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:07.139266 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:07.170661 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:07.178407 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:07.558281 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:07.637732 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:07.669002 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:07.677661 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:08.051332 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:08.139445 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:08.169030 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:08.178874 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:08.560437 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:08.639245 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:08.670828 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:08.678293 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:09.049111 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:09.137814 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:09.186501 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:09.188941 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1024 19:26:09.549087 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:09.640480 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:09.667484 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:09.675964 1118138 kapi.go:107] duration metric: took 1m8.042430621s to wait for kubernetes.io/minikube-addons=registry ...
	I1024 19:26:10.049130 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:10.137790 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:10.173458 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:10.549883 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:10.637989 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:10.668363 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:10.765716 1118138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:26:10.816897 1118138 api_server.go:72] duration metric: took 1m15.047070166s to wait for apiserver process to appear ...
	I1024 19:26:10.816962 1118138 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:26:10.817005 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:26:10.817090 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:26:11.055549 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:11.141842 1118138 cri.go:89] found id: "af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:26:11.141904 1118138 cri.go:89] found id: ""
	I1024 19:26:11.141925 1118138 logs.go:284] 1 containers: [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37]
	I1024 19:26:11.142018 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:11.145144 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:11.160794 1118138 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:26:11.160939 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:26:11.168138 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:11.286149 1118138 cri.go:89] found id: "ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:11.286212 1118138 cri.go:89] found id: ""
	I1024 19:26:11.286232 1118138 logs.go:284] 1 containers: [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df]
	I1024 19:26:11.286318 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:11.296577 1118138 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:26:11.296692 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:26:11.492243 1118138 cri.go:89] found id: "ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:11.492322 1118138 cri.go:89] found id: ""
	I1024 19:26:11.492343 1118138 logs.go:284] 1 containers: [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb]
	I1024 19:26:11.492421 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:11.504920 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:26:11.505038 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:26:11.556497 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:11.647936 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:11.668369 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:11.752616 1118138 cri.go:89] found id: "30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:11.752676 1118138 cri.go:89] found id: ""
	I1024 19:26:11.752697 1118138 logs.go:284] 1 containers: [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b]
	I1024 19:26:11.752786 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:11.776044 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:26:11.776159 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:26:12.020780 1118138 cri.go:89] found id: "a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:12.020852 1118138 cri.go:89] found id: ""
	I1024 19:26:12.020875 1118138 logs.go:284] 1 containers: [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5]
	I1024 19:26:12.020970 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:12.030743 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:26:12.030879 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:26:12.050844 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:12.138728 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:12.143821 1118138 cri.go:89] found id: "837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:12.143847 1118138 cri.go:89] found id: ""
	I1024 19:26:12.143856 1118138 logs.go:284] 1 containers: [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b]
	I1024 19:26:12.143921 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:12.154391 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:26:12.154475 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:26:12.169119 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:12.227381 1118138 cri.go:89] found id: "05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:12.227467 1118138 cri.go:89] found id: ""
	I1024 19:26:12.227498 1118138 logs.go:284] 1 containers: [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703]
	I1024 19:26:12.227593 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:12.233283 1118138 logs.go:123] Gathering logs for kubelet ...
	I1024 19:26:12.233344 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1024 19:26:12.299702 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.504813    1357 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.300024 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.504846    1357 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.300261 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.524197    1357 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.300548 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.316705 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.316995 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.317238 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:12.317466 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:12.349499 1118138 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:26:12.349558 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:26:12.536005 1118138 logs.go:123] Gathering logs for kube-apiserver [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37] ...
	I1024 19:26:12.536039 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:26:12.552144 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:12.651367 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:12.662204 1118138 logs.go:123] Gathering logs for etcd [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df] ...
	I1024 19:26:12.662244 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:12.669134 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:12.743017 1118138 logs.go:123] Gathering logs for kube-proxy [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5] ...
	I1024 19:26:12.743049 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:12.805536 1118138 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:26:12.805565 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:26:12.910481 1118138 logs.go:123] Gathering logs for container status ...
	I1024 19:26:12.910516 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:26:13.020698 1118138 logs.go:123] Gathering logs for dmesg ...
	I1024 19:26:13.020735 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:26:13.050439 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:13.059802 1118138 logs.go:123] Gathering logs for coredns [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb] ...
	I1024 19:26:13.059834 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:13.139237 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:13.145344 1118138 logs.go:123] Gathering logs for kube-scheduler [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b] ...
	I1024 19:26:13.145373 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:13.168845 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:13.217492 1118138 logs.go:123] Gathering logs for kube-controller-manager [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b] ...
	I1024 19:26:13.217523 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:13.354257 1118138 logs.go:123] Gathering logs for kindnet [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703] ...
	I1024 19:26:13.354332 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:13.436742 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:13.436817 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 19:26:13.436894 1118138 out.go:239] X Problems detected in kubelet:
	W1024 19:26:13.437076 1118138 out.go:239]   Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:13.437092 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:13.437107 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:13.437115 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:13.437128 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:13.437139 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:13.437148 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:26:13.570482 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:13.640894 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:13.667635 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:14.050030 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:14.138793 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:14.167197 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:14.552636 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:14.637564 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:14.667314 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:15.050294 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:15.139741 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:15.167845 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:15.568480 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:15.638145 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:15.667482 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:16.050194 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:16.138359 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:16.169449 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:16.555477 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:16.643667 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:16.668784 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:17.050746 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:17.138264 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:17.180461 1118138 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:26:17.555281 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:17.638600 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:17.674475 1118138 kapi.go:107] duration metric: took 1m16.045670856s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1024 19:26:18.050470 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:18.143035 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:18.549700 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:18.637296 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:19.049286 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:19.137729 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:19.549053 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:19.638300 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:20.049961 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:20.137901 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:20.550121 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:20.637930 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:21.050026 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:21.138141 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:21.555123 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:21.638944 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:22.052768 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:22.140841 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:22.550359 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:22.637630 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:23.049206 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:23.139011 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:23.438818 1118138 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1024 19:26:23.448034 1118138 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1024 19:26:23.451089 1118138 api_server.go:141] control plane version: v1.28.3
	I1024 19:26:23.451369 1118138 api_server.go:131] duration metric: took 12.634384269s to wait for apiserver health ...
	I1024 19:26:23.451404 1118138 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:26:23.451454 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1024 19:26:23.451548 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1024 19:26:23.525582 1118138 cri.go:89] found id: "af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:26:23.525664 1118138 cri.go:89] found id: ""
	I1024 19:26:23.525697 1118138 logs.go:284] 1 containers: [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37]
	I1024 19:26:23.526058 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.537203 1118138 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1024 19:26:23.537320 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1024 19:26:23.560449 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:23.610786 1118138 cri.go:89] found id: "ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:23.610846 1118138 cri.go:89] found id: ""
	I1024 19:26:23.610878 1118138 logs.go:284] 1 containers: [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df]
	I1024 19:26:23.610962 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.625489 1118138 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1024 19:26:23.625614 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1024 19:26:23.643239 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:23.681799 1118138 cri.go:89] found id: "ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:23.681869 1118138 cri.go:89] found id: ""
	I1024 19:26:23.681891 1118138 logs.go:284] 1 containers: [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb]
	I1024 19:26:23.681978 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.691256 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1024 19:26:23.691345 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1024 19:26:23.768281 1118138 cri.go:89] found id: "30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:23.768304 1118138 cri.go:89] found id: ""
	I1024 19:26:23.768321 1118138 logs.go:284] 1 containers: [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b]
	I1024 19:26:23.768377 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.774094 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1024 19:26:23.774174 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1024 19:26:23.848548 1118138 cri.go:89] found id: "a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:23.848572 1118138 cri.go:89] found id: ""
	I1024 19:26:23.848581 1118138 logs.go:284] 1 containers: [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5]
	I1024 19:26:23.848652 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.853358 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1024 19:26:23.853474 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1024 19:26:23.913764 1118138 cri.go:89] found id: "837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:23.913835 1118138 cri.go:89] found id: ""
	I1024 19:26:23.913857 1118138 logs.go:284] 1 containers: [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b]
	I1024 19:26:23.913940 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.920118 1118138 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1024 19:26:23.920228 1118138 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1024 19:26:23.979651 1118138 cri.go:89] found id: "05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:23.979720 1118138 cri.go:89] found id: ""
	I1024 19:26:23.979744 1118138 logs.go:284] 1 containers: [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703]
	I1024 19:26:23.979826 1118138 ssh_runner.go:195] Run: which crictl
	I1024 19:26:23.984483 1118138 logs.go:123] Gathering logs for kube-apiserver [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37] ...
	I1024 19:26:23.984547 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37"
	I1024 19:26:24.061929 1118138 logs.go:123] Gathering logs for etcd [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df] ...
	I1024 19:26:24.062010 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df"
	I1024 19:26:24.077078 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:24.141274 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:24.150373 1118138 logs.go:123] Gathering logs for kube-proxy [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5] ...
	I1024 19:26:24.150443 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5"
	I1024 19:26:24.205385 1118138 logs.go:123] Gathering logs for kindnet [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703] ...
	I1024 19:26:24.205461 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703"
	I1024 19:26:24.264010 1118138 logs.go:123] Gathering logs for container status ...
	I1024 19:26:24.264085 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1024 19:26:24.362394 1118138 logs.go:123] Gathering logs for kube-controller-manager [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b] ...
	I1024 19:26:24.362465 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b"
	I1024 19:26:24.496338 1118138 logs.go:123] Gathering logs for CRI-O ...
	I1024 19:26:24.496452 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1024 19:26:24.553677 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:24.602570 1118138 logs.go:123] Gathering logs for kubelet ...
	I1024 19:26:24.602641 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1024 19:26:24.638442 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1024 19:26:24.666148 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.504813    1357 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.666417 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.504846    1357 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.666624 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: W1024 19:24:54.524197    1357 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.666849 1118138 logs.go:138] Found kubelet problem: Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.679205 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.679472 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.679681 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:24.679906 1118138 logs.go:138] Found kubelet problem: Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:24.713833 1118138 logs.go:123] Gathering logs for dmesg ...
	I1024 19:26:24.713971 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1024 19:26:24.749828 1118138 logs.go:123] Gathering logs for describe nodes ...
	I1024 19:26:24.749898 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1024 19:26:24.899701 1118138 logs.go:123] Gathering logs for coredns [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb] ...
	I1024 19:26:24.899735 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb"
	I1024 19:26:24.960721 1118138 logs.go:123] Gathering logs for kube-scheduler [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b] ...
	I1024 19:26:24.960752 1118138 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b"
	I1024 19:26:25.007321 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:25.007349 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1024 19:26:25.007397 1118138 out.go:239] X Problems detected in kubelet:
	W1024 19:26:25.007410 1118138 out.go:239]   Oct 24 19:24:54 addons-228070 kubelet[1357]: E1024 19:24:54.524240    1357 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-228070' and this object
	W1024 19:26:25.007418 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.050739    1357 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:25.007426 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.050785    1357 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-228070" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:25.007436 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: W1024 19:25:29.051380    1357 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	W1024 19:26:25.007445 1118138 out.go:239]   Oct 24 19:25:29 addons-228070 kubelet[1357]: E1024 19:25:29.051408    1357 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-228070" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-228070' and this object
	I1024 19:26:25.007457 1118138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:26:25.007463 1118138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:26:25.049301 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:25.137868 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:25.549167 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:25.637936 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:26.062169 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:26.137595 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:26.551683 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:26.638173 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:27.049788 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1024 19:26:27.137722 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:27.549779 1118138 kapi.go:107] duration metric: took 1m25.545904341s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1024 19:26:27.639609 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:28.137444 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:28.637332 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:29.137862 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:29.638285 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:30.137393 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:30.638011 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:31.142884 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:31.638395 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:32.137548 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:32.637680 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:33.137147 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:33.637085 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:34.137821 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:34.637284 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:35.018340 1118138 system_pods.go:59] 18 kube-system pods found
	I1024 19:26:35.018374 1118138 system_pods.go:61] "coredns-5dd5756b68-fhbrz" [ab8c6257-b394-452b-ad47-175c0704f944] Running
	I1024 19:26:35.018381 1118138 system_pods.go:61] "csi-hostpath-attacher-0" [7a7d8dc2-251a-4db2-a8d3-c61c74797d8f] Running
	I1024 19:26:35.018386 1118138 system_pods.go:61] "csi-hostpath-resizer-0" [6def26fa-40e8-47c4-8680-9528c7339358] Running
	I1024 19:26:35.018391 1118138 system_pods.go:61] "csi-hostpathplugin-zsvq4" [e00a413f-7ea8-45e5-80c6-d3f052fa7b96] Running
	I1024 19:26:35.018396 1118138 system_pods.go:61] "etcd-addons-228070" [64901b37-c071-45df-9df3-c16aabf42b04] Running
	I1024 19:26:35.018401 1118138 system_pods.go:61] "kindnet-zpk2b" [cd7fe14a-6160-4d8f-a555-181f7ffe8365] Running
	I1024 19:26:35.018406 1118138 system_pods.go:61] "kube-apiserver-addons-228070" [36b4d137-4039-4168-9c0a-3cc996475f57] Running
	I1024 19:26:35.018412 1118138 system_pods.go:61] "kube-controller-manager-addons-228070" [b7613bd0-63e4-453f-b73c-455e101f0cbf] Running
	I1024 19:26:35.018422 1118138 system_pods.go:61] "kube-ingress-dns-minikube" [f748865c-b605-4237-9edf-8387e9925319] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1024 19:26:35.018434 1118138 system_pods.go:61] "kube-proxy-qtmf6" [abf30c53-c321-472b-b6ae-08df96a309bd] Running
	I1024 19:26:35.018440 1118138 system_pods.go:61] "kube-scheduler-addons-228070" [15009577-04f0-4752-8104-ce67e82cb40d] Running
	I1024 19:26:35.018446 1118138 system_pods.go:61] "metrics-server-7c66d45ddc-fgmf7" [de24d5b2-08eb-4c8a-9c9b-3d6eb76712d8] Running
	I1024 19:26:35.018451 1118138 system_pods.go:61] "nvidia-device-plugin-daemonset-vnscp" [638ff2b2-e718-4d5a-aa20-ab6d29a35186] Running
	I1024 19:26:35.018456 1118138 system_pods.go:61] "registry-chlmt" [1869d1d7-07f4-4d9c-94d6-4bcc1e8efe3a] Running
	I1024 19:26:35.018465 1118138 system_pods.go:61] "registry-proxy-xdq2s" [16223b37-cd2a-41d2-8ebd-ee2c4fcef1a2] Running
	I1024 19:26:35.018470 1118138 system_pods.go:61] "snapshot-controller-58dbcc7b99-nrnxv" [d6325577-d6ec-4198-9f67-6baaf5e960b0] Running
	I1024 19:26:35.018476 1118138 system_pods.go:61] "snapshot-controller-58dbcc7b99-v2jmr" [75c26e55-e64d-4021-8768-3e849b1ca7b5] Running
	I1024 19:26:35.018484 1118138 system_pods.go:61] "storage-provisioner" [4f736afb-13f3-46ab-bfab-0369c68cd496] Running
	I1024 19:26:35.018489 1118138 system_pods.go:74] duration metric: took 11.567067626s to wait for pod list to return data ...
	I1024 19:26:35.018502 1118138 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:26:35.021469 1118138 default_sa.go:45] found service account: "default"
	I1024 19:26:35.021498 1118138 default_sa.go:55] duration metric: took 2.988887ms for default service account to be created ...
	I1024 19:26:35.021509 1118138 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:26:35.032057 1118138 system_pods.go:86] 18 kube-system pods found
	I1024 19:26:35.032096 1118138 system_pods.go:89] "coredns-5dd5756b68-fhbrz" [ab8c6257-b394-452b-ad47-175c0704f944] Running
	I1024 19:26:35.032104 1118138 system_pods.go:89] "csi-hostpath-attacher-0" [7a7d8dc2-251a-4db2-a8d3-c61c74797d8f] Running
	I1024 19:26:35.032109 1118138 system_pods.go:89] "csi-hostpath-resizer-0" [6def26fa-40e8-47c4-8680-9528c7339358] Running
	I1024 19:26:35.032114 1118138 system_pods.go:89] "csi-hostpathplugin-zsvq4" [e00a413f-7ea8-45e5-80c6-d3f052fa7b96] Running
	I1024 19:26:35.032120 1118138 system_pods.go:89] "etcd-addons-228070" [64901b37-c071-45df-9df3-c16aabf42b04] Running
	I1024 19:26:35.032125 1118138 system_pods.go:89] "kindnet-zpk2b" [cd7fe14a-6160-4d8f-a555-181f7ffe8365] Running
	I1024 19:26:35.032130 1118138 system_pods.go:89] "kube-apiserver-addons-228070" [36b4d137-4039-4168-9c0a-3cc996475f57] Running
	I1024 19:26:35.032137 1118138 system_pods.go:89] "kube-controller-manager-addons-228070" [b7613bd0-63e4-453f-b73c-455e101f0cbf] Running
	I1024 19:26:35.032145 1118138 system_pods.go:89] "kube-ingress-dns-minikube" [f748865c-b605-4237-9edf-8387e9925319] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1024 19:26:35.032152 1118138 system_pods.go:89] "kube-proxy-qtmf6" [abf30c53-c321-472b-b6ae-08df96a309bd] Running
	I1024 19:26:35.032164 1118138 system_pods.go:89] "kube-scheduler-addons-228070" [15009577-04f0-4752-8104-ce67e82cb40d] Running
	I1024 19:26:35.032171 1118138 system_pods.go:89] "metrics-server-7c66d45ddc-fgmf7" [de24d5b2-08eb-4c8a-9c9b-3d6eb76712d8] Running
	I1024 19:26:35.032179 1118138 system_pods.go:89] "nvidia-device-plugin-daemonset-vnscp" [638ff2b2-e718-4d5a-aa20-ab6d29a35186] Running
	I1024 19:26:35.032184 1118138 system_pods.go:89] "registry-chlmt" [1869d1d7-07f4-4d9c-94d6-4bcc1e8efe3a] Running
	I1024 19:26:35.032190 1118138 system_pods.go:89] "registry-proxy-xdq2s" [16223b37-cd2a-41d2-8ebd-ee2c4fcef1a2] Running
	I1024 19:26:35.032196 1118138 system_pods.go:89] "snapshot-controller-58dbcc7b99-nrnxv" [d6325577-d6ec-4198-9f67-6baaf5e960b0] Running
	I1024 19:26:35.032203 1118138 system_pods.go:89] "snapshot-controller-58dbcc7b99-v2jmr" [75c26e55-e64d-4021-8768-3e849b1ca7b5] Running
	I1024 19:26:35.032208 1118138 system_pods.go:89] "storage-provisioner" [4f736afb-13f3-46ab-bfab-0369c68cd496] Running
	I1024 19:26:35.032215 1118138 system_pods.go:126] duration metric: took 10.701184ms to wait for k8s-apps to be running ...
	I1024 19:26:35.032226 1118138 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:26:35.032288 1118138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:26:35.046474 1118138 system_svc.go:56] duration metric: took 14.237908ms WaitForService to wait for kubelet.
	I1024 19:26:35.046500 1118138 kubeadm.go:581] duration metric: took 1m39.276680332s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:26:35.046521 1118138 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:26:35.049848 1118138 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 19:26:35.049879 1118138 node_conditions.go:123] node cpu capacity is 2
	I1024 19:26:35.049890 1118138 node_conditions.go:105] duration metric: took 3.364319ms to run NodePressure ...
	I1024 19:26:35.049900 1118138 start.go:228] waiting for startup goroutines ...
	I1024 19:26:35.138012 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:35.637336 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:36.138431 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:36.638383 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:37.137407 1118138 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1024 19:26:37.638106 1118138 kapi.go:107] duration metric: took 1m33.01485004s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1024 19:26:37.640421 1118138 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-228070 cluster.
	I1024 19:26:37.642305 1118138 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1024 19:26:37.644087 1118138 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1024 19:26:37.646214 1118138 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1024 19:26:37.648060 1118138 addons.go:502] enable addons completed in 1m42.132215074s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner inspektor-gadget metrics-server default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1024 19:26:37.648135 1118138 start.go:233] waiting for cluster config update ...
	I1024 19:26:37.648161 1118138 start.go:242] writing updated cluster config ...
	I1024 19:26:37.648546 1118138 ssh_runner.go:195] Run: rm -f paused
	I1024 19:26:37.714003 1118138 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:26:37.716294 1118138 out.go:177] * Done! kubectl is now configured to use "addons-228070" cluster and "default" namespace by default
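	[editor's note] Regarding the gcp-auth hint printed above: a pod opts out of the credential mount by carrying the `gcp-auth-skip-secret` label on the pod itself. A minimal sketch of such a manifest follows; the pod name and image are illustrative placeholders, not taken from this run:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # placeholder name, not from this test run
	  labels:
	    gcp-auth-skip-secret: "true" # tells the gcp-auth webhook not to mount credentials
	spec:
	  containers:
	  - name: app
	    image: docker.io/nginx:alpine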
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:32:01 addons-228070 crio[888]: time="2023-10-24 19:32:01.601689374Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2ac7a020-86f2-4cb7-9e02-916b340d9c0f name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:01 addons-228070 crio[888]: time="2023-10-24 19:32:01.603058985Z" level=info msg="Pulling image: docker.io/nginx:latest" id=d1731f6a-370e-4e15-a6da-e7159c428518 name=/runtime.v1.ImageService/PullImage
	Oct 24 19:32:01 addons-228070 crio[888]: time="2023-10-24 19:32:01.605443655Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 24 19:32:08 addons-228070 crio[888]: time="2023-10-24 19:32:08.601669039Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c924a292-035e-4d20-aa4d-45ed3bdc25d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:08 addons-228070 crio[888]: time="2023-10-24 19:32:08.603535928Z" level=info msg="Image docker.io/nginx:alpine not found" id=c924a292-035e-4d20-aa4d-45ed3bdc25d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:20 addons-228070 crio[888]: time="2023-10-24 19:32:20.602215554Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=463f19d8-6441-43db-99ca-86f326ef68cd name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:20 addons-228070 crio[888]: time="2023-10-24 19:32:20.602433810Z" level=info msg="Image docker.io/nginx:alpine not found" id=463f19d8-6441-43db-99ca-86f326ef68cd name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:33 addons-228070 crio[888]: time="2023-10-24 19:32:33.602634888Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3189174b-cb56-4c53-a035-74cef21233b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:33 addons-228070 crio[888]: time="2023-10-24 19:32:33.602922763Z" level=info msg="Image docker.io/nginx:alpine not found" id=3189174b-cb56-4c53-a035-74cef21233b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:45 addons-228070 crio[888]: time="2023-10-24 19:32:45.601435366Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1ff63f29-8db8-4f7a-b9ea-331146214fc3 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:45 addons-228070 crio[888]: time="2023-10-24 19:32:45.601657642Z" level=info msg="Image docker.io/nginx:alpine not found" id=1ff63f29-8db8-4f7a-b9ea-331146214fc3 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:47 addons-228070 crio[888]: time="2023-10-24 19:32:47.601440611Z" level=info msg="Checking image status: docker.io/nginx:latest" id=0f7e1a52-5fa4-42df-83d5-47350a7045ac name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:47 addons-228070 crio[888]: time="2023-10-24 19:32:47.601658013Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0f7e1a52-5fa4-42df-83d5-47350a7045ac name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:56 addons-228070 crio[888]: time="2023-10-24 19:32:56.601568719Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b6ee7b51-add2-4911-94ac-6606554c3e5c name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:56 addons-228070 crio[888]: time="2023-10-24 19:32:56.601830641Z" level=info msg="Image docker.io/nginx:alpine not found" id=b6ee7b51-add2-4911-94ac-6606554c3e5c name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:58 addons-228070 crio[888]: time="2023-10-24 19:32:58.602535470Z" level=info msg="Checking image status: docker.io/nginx:latest" id=00d06657-d576-4983-864a-06e350c50fbc name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:32:58 addons-228070 crio[888]: time="2023-10-24 19:32:58.602770102Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=00d06657-d576-4983-864a-06e350c50fbc name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:33:09 addons-228070 crio[888]: time="2023-10-24 19:33:09.601840524Z" level=info msg="Checking image status: docker.io/nginx:latest" id=809b7137-ad37-4f26-ba7c-9c8340f7ea80 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:33:09 addons-228070 crio[888]: time="2023-10-24 19:33:09.602060618Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=809b7137-ad37-4f26-ba7c-9c8340f7ea80 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:33:09 addons-228070 crio[888]: time="2023-10-24 19:33:09.601849394Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1e820fd9-a0a3-4a2f-a975-5ed65b71432c name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:33:09 addons-228070 crio[888]: time="2023-10-24 19:33:09.602252814Z" level=info msg="Image docker.io/nginx:alpine not found" id=1e820fd9-a0a3-4a2f-a975-5ed65b71432c name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:33:09 addons-228070 crio[888]: time="2023-10-24 19:33:09.603211193Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=896d8a52-b8b7-4c94-a039-67691d1ff0c2 name=/runtime.v1.ImageService/PullImage
	Oct 24 19:33:09 addons-228070 crio[888]: time="2023-10-24 19:33:09.609297427Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 24 19:33:22 addons-228070 crio[888]: time="2023-10-24 19:33:22.602208755Z" level=info msg="Checking image status: docker.io/nginx:latest" id=2aeb3b5c-237c-4088-9dc7-d6df0bbefc3c name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:33:22 addons-228070 crio[888]: time="2023-10-24 19:33:22.602427757Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:12ef77b9fab686eea5e3fd0d6f3c7b2763eaeb657f037121335a60805d3be8a7,RepoTags:[docker.io/library/nginx:latest],RepoDigests:[docker.io/library/nginx@sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 docker.io/library/nginx@sha256:b4af4f8b6470febf45dc10f564551af682a802eda1743055a7dfc8332dffa595],Size_:196204814,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2aeb3b5c-237c-4088-9dc7-d6df0bbefc3c name=/runtime.v1.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	23e68bc94b71f       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                                             2 minutes ago       Exited              minikube-ingress-dns                     6                   f7308c9290220       kube-ingress-dns-minikube
	a29c55d246070       ghcr.io/headlamp-k8s/headlamp@sha256:8e813897da00c345b1169d624b32e2367e5da1dbbffe33226f8a92973b816b50                                        6 minutes ago       Running             headlamp                                 0                   b8c4ed44d3106       headlamp-94b766c-tn68w
	90aa35ebcef96       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 6 minutes ago       Running             gcp-auth                                 0                   88060deb243cb       gcp-auth-d4c87556c-gq5sh
	f12c66d58fcbf       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	1bd865277c8cc       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          7 minutes ago       Running             csi-provisioner                          0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	bc22eb3c5d3e6       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            7 minutes ago       Running             liveness-probe                           0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	f2d9cd1cbda4c       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           7 minutes ago       Running             hostpath                                 0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	4ff4759f93747       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                7 minutes ago       Running             node-driver-registrar                    0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	fe77220d409eb       registry.k8s.io/ingress-nginx/controller@sha256:79e6b8cb9a4e9cfad53862c2aa3e98b8281cc353908517a5e636a531ad331d7c                             7 minutes ago       Running             controller                               0                   3e683cbae41c7       ingress-nginx-controller-6f48fc54bd-cskmn
	388560cb3eb8b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   7 minutes ago       Exited              patch                                    0                   6f76a7cb3d750       ingress-nginx-admission-patch-ht52w
	d0df4f62fd906       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   7 minutes ago       Running             csi-external-health-monitor-controller   0                   4515b28cf49ca       csi-hostpathplugin-zsvq4
	2b7d2d1f38e08       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      7 minutes ago       Running             volume-snapshot-controller               0                   3b32fcf319335       snapshot-controller-58dbcc7b99-nrnxv
	a5a901295ccaf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5                   7 minutes ago       Exited              create                                   0                   828972d42ca5b       ingress-nginx-admission-create-grpcs
	39576799cd883       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              7 minutes ago       Running             csi-resizer                              0                   6ac9a0bb61fe8       csi-hostpath-resizer-0
	c9bb64f813447       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      7 minutes ago       Running             volume-snapshot-controller               0                   143f92bafe19f       snapshot-controller-58dbcc7b99-v2jmr
	991a2a6d18e6a       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             7 minutes ago       Running             local-path-provisioner                   0                   937b5ac4e09c1       local-path-provisioner-78b46b4d5c-n4dx9
	9a4f5374f2806       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             7 minutes ago       Running             csi-attacher                             0                   2b9da20e83cf4       csi-hostpath-attacher-0
	e7593a21d5782       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             8 minutes ago       Running             storage-provisioner                      0                   406b51454fe54       storage-provisioner
	ed04655a9a89b       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             8 minutes ago       Running             coredns                                  0                   6fe415cbd9cc0       coredns-5dd5756b68-fhbrz
	05a4fa5dbaf4c       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                                             8 minutes ago       Running             kindnet-cni                              0                   58dbd906b345f       kindnet-zpk2b
	a568bb4094016       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                                             8 minutes ago       Running             kube-proxy                               0                   72163d1a17079       kube-proxy-qtmf6
	af521e6bd1f01       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                                                             8 minutes ago       Running             kube-apiserver                           0                   ffad0e29af7ce       kube-apiserver-addons-228070
	837e6ec9f669d       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                                             8 minutes ago       Running             kube-controller-manager                  0                   80c27b65764d7       kube-controller-manager-addons-228070
	30760e9bfa89f       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                                             8 minutes ago       Running             kube-scheduler                           0                   8174ff04bb59c       kube-scheduler-addons-228070
	ba7f1603e1423       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                                             8 minutes ago       Running             etcd                                     0                   054f41ae6c499       etcd-addons-228070
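	
	Beyond the missing nginx pod, the one anomaly in this table is kube-ingress-dns-minikube:
	Exited with 6 attempts, i.e. crash-looping independently of the image-pull problem. A triage
	sketch (context and pod name taken from this run) is to read the previous container's logs:
	
	  kubectl --context addons-228070 -n kube-system logs kube-ingress-dns-minikube --previous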
	
	* 
	* ==> coredns [ed04655a9a89be45db69f6d5277799e1b105591c6abf990c0ea74a7d5697cbbb] <==
	* [INFO] 10.244.0.13:36388 - 41647 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002260635s
	[INFO] 10.244.0.13:42282 - 63330 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000110318s
	[INFO] 10.244.0.13:42282 - 62556 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152926s
	[INFO] 10.244.0.13:33118 - 18552 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000100381s
	[INFO] 10.244.0.13:33118 - 28276 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000061432s
	[INFO] 10.244.0.13:57352 - 22278 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059585s
	[INFO] 10.244.0.13:57352 - 14595 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036439s
	[INFO] 10.244.0.13:58946 - 47308 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000103876s
	[INFO] 10.244.0.13:58946 - 38094 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112352s
	[INFO] 10.244.0.13:35116 - 20549 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001406862s
	[INFO] 10.244.0.13:35116 - 9339 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001440331s
	[INFO] 10.244.0.13:47372 - 59052 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006112s
	[INFO] 10.244.0.13:47372 - 61870 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111811s
	[INFO] 10.244.0.19:57549 - 49706 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000282591s
	[INFO] 10.244.0.19:43958 - 25922 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000408975s
	[INFO] 10.244.0.19:40902 - 54517 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000221571s
	[INFO] 10.244.0.19:58432 - 38005 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000178871s
	[INFO] 10.244.0.19:56734 - 27703 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000223269s
	[INFO] 10.244.0.19:36669 - 29902 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000266805s
	[INFO] 10.244.0.19:50379 - 5908 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002136065s
	[INFO] 10.244.0.19:57043 - 47835 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002451666s
	[INFO] 10.244.0.19:41005 - 41696 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000791734s
	[INFO] 10.244.0.19:42052 - 61210 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001354005s
	[INFO] 10.244.0.21:37151 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000200089s
	[INFO] 10.244.0.21:44132 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000131248s
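	
	The paired NXDOMAIN/NOERROR lines above are ordinary resolv.conf search-path expansion, not an
	error: with the cluster default ndots:5, a name such as registry.kube-system.svc.cluster.local
	is tried with each search suffix (kube-system.svc.cluster.local, svc.cluster.local,
	cluster.local, us-east-2.compute.internal) before the bare name resolves. A sketch for
	inspecting the search list a pod actually receives (throwaway probe pod; note that pulling
	busybox may itself hit the rate limit seen later in this log):
	
	  kubectl --context addons-228070 run dnsprobe --rm -it --restart=Never \
	    --image=busybox -- cat /etc/resolv.conf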
	
	* 
	* ==> describe nodes <==
	* Name:               addons-228070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-228070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=addons-228070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_24_43_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-228070
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-228070"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:24:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-228070
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:33:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:32:22 +0000   Tue, 24 Oct 2023 19:24:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:32:22 +0000   Tue, 24 Oct 2023 19:24:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:32:22 +0000   Tue, 24 Oct 2023 19:24:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:32:22 +0000   Tue, 24 Oct 2023 19:25:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-228070
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 bff6bc2f0e7246fcb1d863c8f524e2a6
	  System UUID:                68df10e7-4bae-46a3-a993-9195f34a2cb5
	  Boot ID:                    f05db690-1143-478b-8d18-db062f271a9b
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     task-pv-pod-restore                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  gcp-auth                    gcp-auth-d4c87556c-gq5sh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  headlamp                    headlamp-94b766c-tn68w                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  ingress-nginx               ingress-nginx-controller-6f48fc54bd-cskmn    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         8m30s
	  kube-system                 coredns-5dd5756b68-fhbrz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m36s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 csi-hostpathplugin-zsvq4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 etcd-addons-228070                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m49s
	  kube-system                 kindnet-zpk2b                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m37s
	  kube-system                 kube-apiserver-addons-228070                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-controller-manager-addons-228070        200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-qtmf6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-scheduler-addons-228070                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 snapshot-controller-58dbcc7b99-nrnxv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 snapshot-controller-58dbcc7b99-v2jmr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  local-path-storage          local-path-provisioner-78b46b4d5c-n4dx9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m56s (x8 over 8m56s)  kubelet          Node addons-228070 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m56s (x8 over 8m56s)  kubelet          Node addons-228070 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m56s (x8 over 8m56s)  kubelet          Node addons-228070 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m49s                  kubelet          Node addons-228070 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m49s                  kubelet          Node addons-228070 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m49s                  kubelet          Node addons-228070 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m37s                  node-controller  Node addons-228070 event: Registered Node addons-228070 in Controller
	  Normal  NodeReady                8m3s                   kubelet          Node addons-228070 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001113] FS-Cache: O-key=[8] '80623b0000000000'
	[  +0.000757] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000f7ef6ada
	[  +0.001085] FS-Cache: N-key=[8] '80623b0000000000'
	[  +0.002635] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000bf36fe5e
	[  +0.001181] FS-Cache: O-key=[8] '80623b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000b7ed4e62
	[  +0.001156] FS-Cache: N-key=[8] '80623b0000000000'
	[  +3.138037] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000984] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000a1cd37ca
	[  +0.001134] FS-Cache: O-key=[8] '7f623b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001008] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000f7ef6ada
	[  +0.001075] FS-Cache: N-key=[8] '7f623b0000000000'
	[  +0.302369] FS-Cache: Duplicate cookie detected
	[  +0.000770] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001049] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=000000003058710d
	[  +0.001121] FS-Cache: O-key=[8] '85623b0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000c7864bf1
	[  +0.001088] FS-Cache: N-key=[8] '85623b0000000000'
	
	* 
	* ==> etcd [ba7f1603e142364d5fa0d5ee720df97030dbc4c7b10e1e726638f700670525df] <==
	* {"level":"info","ts":"2023-10-24T19:24:36.2137Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-24T19:24:36.213961Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-24T19:24:36.222487Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T19:24:36.222604Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:24:36.677777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-24T19:24:36.677901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-24T19:24:36.677951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-24T19:24:36.67802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-24T19:24:36.678053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-24T19:24:36.678108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-24T19:24:36.678142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-24T19:24:36.681881Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:36.685941Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-228070 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:24:36.686015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:24:36.687102Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:24:36.687307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:24:36.688178Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-24T19:24:36.688652Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:24:36.688715Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:24:36.688924Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:36.689045Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:36.689095Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:24:59.431665Z","caller":"traceutil/trace.go:171","msg":"trace[734719481] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"112.299376ms","start":"2023-10-24T19:24:59.319352Z","end":"2023-10-24T19:24:59.431651Z","steps":["trace[734719481] 'process raft request'  (duration: 111.994294ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-24T19:24:59.475254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.535055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-10-24T19:24:59.50079Z","caller":"traceutil/trace.go:171","msg":"trace[1394886172] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:433; }","duration":"135.080754ms","start":"2023-10-24T19:24:59.365694Z","end":"2023-10-24T19:24:59.500774Z","steps":["trace[1394886172] 'agreement among raft nodes before linearized reading'  (duration: 108.609142ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [90aa35ebcef96ff423d95c233ad9c62b62805d0ca3c4783478577ca8337fc4b5] <==
	* 2023/10/24 19:26:36 GCP Auth Webhook started!
	2023/10/24 19:26:44 Ready to marshal response ...
	2023/10/24 19:26:44 Ready to write response ...
	2023/10/24 19:26:44 Ready to marshal response ...
	2023/10/24 19:26:44 Ready to write response ...
	2023/10/24 19:26:48 Ready to marshal response ...
	2023/10/24 19:26:48 Ready to write response ...
	2023/10/24 19:26:53 Ready to marshal response ...
	2023/10/24 19:26:53 Ready to write response ...
	2023/10/24 19:26:54 Ready to marshal response ...
	2023/10/24 19:26:54 Ready to write response ...
	2023/10/24 19:26:55 Ready to marshal response ...
	2023/10/24 19:26:55 Ready to write response ...
	2023/10/24 19:26:55 Ready to marshal response ...
	2023/10/24 19:26:55 Ready to write response ...
	2023/10/24 19:26:59 Ready to marshal response ...
	2023/10/24 19:26:59 Ready to write response ...
	2023/10/24 19:27:24 Ready to marshal response ...
	2023/10/24 19:27:24 Ready to write response ...
	2023/10/24 19:27:29 Ready to marshal response ...
	2023/10/24 19:27:29 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:33:31 up  9:16,  0 users,  load average: 0.07, 0.70, 1.47
	Linux addons-228070 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [05a4fa5dbaf4c3f321b8f2b86d3101de4ae4bed84009dae5a594bc2bf0512703] <==
	* I1024 19:31:28.685000       1 main.go:227] handling current node
	I1024 19:31:38.697364       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:31:38.697394       1 main.go:227] handling current node
	I1024 19:31:48.701701       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:31:48.701727       1 main.go:227] handling current node
	I1024 19:31:58.714051       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:31:58.714079       1 main.go:227] handling current node
	I1024 19:32:08.726683       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:32:08.726709       1 main.go:227] handling current node
	I1024 19:32:18.732453       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:32:18.732478       1 main.go:227] handling current node
	I1024 19:32:28.736818       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:32:28.736842       1 main.go:227] handling current node
	I1024 19:32:38.745081       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:32:38.745110       1 main.go:227] handling current node
	I1024 19:32:48.749557       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:32:48.749590       1 main.go:227] handling current node
	I1024 19:32:58.761906       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:32:58.761934       1 main.go:227] handling current node
	I1024 19:33:08.774139       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:33:08.774171       1 main.go:227] handling current node
	I1024 19:33:18.786369       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:33:18.786395       1 main.go:227] handling current node
	I1024 19:33:28.795993       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:33:28.796022       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [af521e6bd1f01ef16bc199e8c62bbb24f2dde4c4d63b7dac2ab2a6c771d49e37] <==
	* E1024 19:25:28.931454       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.236.93:443: connect: connection refused
	W1024 19:25:29.025220       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.236.93:443: connect: connection refused
	E1024 19:25:29.025254       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.236.93:443: connect: connection refused
	I1024 19:25:38.931586       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1024 19:25:49.279694       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.9.255:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.9.255:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.9.255:443: connect: connection refused
	W1024 19:25:49.280188       1 handler_proxy.go:93] no RequestInfo found in the context
	E1024 19:25:49.281860       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1024 19:25:49.282834       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.9.255:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.9.255:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.9.255:443: connect: connection refused
	I1024 19:25:49.283049       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 19:25:49.357022       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1024 19:26:38.935475       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1024 19:26:52.640193       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400be179b0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400b2b2eb0), ResponseWriter:(*httpsnoop.rw)(0x400b2b2eb0), Flusher:(*httpsnoop.rw)(0x400b2b2eb0), CloseNotifier:(*httpsnoop.rw)(0x400b2b2eb0), Pusher:(*httpsnoop.rw)(0x400b2b2eb0)}}, encoder:(*versioning.codec)(0x4008eb6320), memAllocator:(*runtime.Allocator)(0x400871e468)})
	I1024 19:26:54.990169       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.144.179"}
	I1024 19:27:10.881818       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1024 19:27:11.794596       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1024 19:27:11.847875       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1024 19:27:12.884200       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1024 19:27:23.801095       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1024 19:27:24.185066       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.143.9"}
	I1024 19:27:50.313807       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1024 19:29:39.125353       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:29:39.125425       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1024 19:29:39.125874       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1024 19:29:39.125922       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
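	
	The gcp-auth webhook errors at the top of this block are the apiserver failing open while the
	webhook backend was still starting (connection refused on the service IP); no further webhook
	failures appear once gcp-auth is up. Had they persisted, the registration and its backing
	endpoints would be the things to check, e.g.:
	
	  kubectl --context addons-228070 get mutatingwebhookconfigurations
	  kubectl --context addons-228070 -n gcp-auth get endpoints gcp-auth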
	
	* 
	* ==> kube-controller-manager [837e6ec9f669d0246c5fb25fbe894d7e46e7d0df3c628bac3d73f8a23697751b] <==
	* I1024 19:27:22.750490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="9.059µs"
	I1024 19:27:24.321364       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1024 19:27:24.701127       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1024 19:27:24.701166       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:27:25.038869       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1024 19:27:25.038915       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:27:29.205415       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1024 19:27:30.187448       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:27:30.187483       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:27:47.384447       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:27:47.384487       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:28:38.323143       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:28:38.323179       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:29:25.452238       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:29:25.452273       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:30:13.887450       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:30:13.887570       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:30:56.680973       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:30:56.681010       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:31:30.478011       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:31:30.478048       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:32:17.414627       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:32:17.414659       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1024 19:33:04.983264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1024 19:33:04.983300       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [a568bb4094016cf3bdc29215d3ba82d93a8635fac4b869a84c5daf8cc4062fd5] <==
	* I1024 19:25:00.573856       1 server_others.go:69] "Using iptables proxy"
	I1024 19:25:00.903063       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1024 19:25:01.085905       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 19:25:01.089064       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:25:01.089177       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 19:25:01.089222       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 19:25:01.089307       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:25:01.089588       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:25:01.089868       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:25:01.091536       1 config.go:188] "Starting service config controller"
	I1024 19:25:01.091646       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:25:01.091708       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:25:01.091737       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:25:01.092374       1 config.go:315] "Starting node config controller"
	I1024 19:25:01.092434       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:25:01.192530       1 shared_informer.go:318] Caches are synced for node config
	I1024 19:25:01.205129       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:25:01.205244       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [30760e9bfa89f7dc64b4c4327d1c5cb5f05841335f029849c02a9a38f2dd6f4b] <==
	* I1024 19:24:39.725726       1 serving.go:348] Generated self-signed cert in-memory
	I1024 19:24:40.738906       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 19:24:40.739028       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:24:40.743446       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1024 19:24:40.743563       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1024 19:24:40.743682       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 19:24:40.743730       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:24:40.743776       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1024 19:24:40.743823       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1024 19:24:40.744115       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 19:24:40.744180       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 19:24:40.844099       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1024 19:24:40.844115       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1024 19:24:40.844139       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 24 19:32:20 addons-228070 kubelet[1357]: E1024 19:32:20.601498    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:32:20 addons-228070 kubelet[1357]: E1024 19:32:20.603083    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="d55f9bf6-38ea-4587-adb0-f64601bb7bf1"
	Oct 24 19:32:31 addons-228070 kubelet[1357]: E1024 19:32:31.892208    1357 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 24 19:32:31 addons-228070 kubelet[1357]: E1024 19:32:31.892263    1357 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 24 19:32:31 addons-228070 kubelet[1357]: E1024 19:32:31.892366    1357 kuberuntime_manager.go:1256] container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rjt45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod-restore_default(40c23b65-3bd3-4526-96a1-30fa85a4b97a): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:32:31 addons-228070 kubelet[1357]: E1024 19:32:31.892406    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod-restore" podUID="40c23b65-3bd3-4526-96a1-30fa85a4b97a"
	Oct 24 19:32:33 addons-228070 kubelet[1357]: I1024 19:32:33.601116    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:32:33 addons-228070 kubelet[1357]: E1024 19:32:33.601453    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:32:33 addons-228070 kubelet[1357]: E1024 19:32:33.603230    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="d55f9bf6-38ea-4587-adb0-f64601bb7bf1"
	Oct 24 19:32:42 addons-228070 kubelet[1357]: E1024 19:32:42.786864    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e114325719898e10593aded209ff791bbffb2a0c580eb8698658962a966270e0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e114325719898e10593aded209ff791bbffb2a0c580eb8698658962a966270e0/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:32:42 addons-228070 kubelet[1357]: E1024 19:32:42.794966    1357 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e114325719898e10593aded209ff791bbffb2a0c580eb8698658962a966270e0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e114325719898e10593aded209ff791bbffb2a0c580eb8698658962a966270e0/diff: no such file or directory, extraDiskErr: <nil>
	Oct 24 19:32:45 addons-228070 kubelet[1357]: E1024 19:32:45.602164    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="d55f9bf6-38ea-4587-adb0-f64601bb7bf1"
	Oct 24 19:32:47 addons-228070 kubelet[1357]: E1024 19:32:47.602447    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="40c23b65-3bd3-4526-96a1-30fa85a4b97a"
	Oct 24 19:32:48 addons-228070 kubelet[1357]: I1024 19:32:48.601509    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:32:48 addons-228070 kubelet[1357]: E1024 19:32:48.602118    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:32:56 addons-228070 kubelet[1357]: E1024 19:32:56.602047    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="d55f9bf6-38ea-4587-adb0-f64601bb7bf1"
	Oct 24 19:32:58 addons-228070 kubelet[1357]: E1024 19:32:58.603906    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="40c23b65-3bd3-4526-96a1-30fa85a4b97a"
	Oct 24 19:33:03 addons-228070 kubelet[1357]: I1024 19:33:03.601784    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:33:03 addons-228070 kubelet[1357]: E1024 19:33:03.602057    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:33:09 addons-228070 kubelet[1357]: E1024 19:33:09.602626    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="40c23b65-3bd3-4526-96a1-30fa85a4b97a"
	Oct 24 19:33:17 addons-228070 kubelet[1357]: I1024 19:33:17.600540    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:33:17 addons-228070 kubelet[1357]: E1024 19:33:17.600831    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
	Oct 24 19:33:22 addons-228070 kubelet[1357]: E1024 19:33:22.603036    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod-restore" podUID="40c23b65-3bd3-4526-96a1-30fa85a4b97a"
	Oct 24 19:33:30 addons-228070 kubelet[1357]: I1024 19:33:30.601016    1357 scope.go:117] "RemoveContainer" containerID="23e68bc94b71fda166eccc92756ee6c4338e538cf15e0d77076f79b88101ef4c"
	Oct 24 19:33:30 addons-228070 kubelet[1357]: E1024 19:33:30.601291    1357 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f748865c-b605-4237-9edf-8387e9925319)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f748865c-b605-4237-9edf-8387e9925319"
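	
	The kubelet entries above pin the root cause of the ImagePullBackOff: Docker Hub answered
	toomanyrequests (the anonymous pull rate limit) for both nginx:alpine and nginx:latest, and
	the nginx and task-pv-pod-restore pods backed off from there. Two hedged workarounds for a CI
	environment like this one, neither taken from this run (the secret name below is hypothetical):
	
	  # Pre-load the image from the host's cache so the kubelet never contacts Docker Hub:
	  minikube -p addons-228070 image load docker.io/nginx:alpine
	
	  # Or create a pull secret to authenticate pulls and raise the limit, then reference
	  # it via imagePullSecrets in the pod spec:
	  kubectl --context addons-228070 create secret docker-registry dockerhub-creds \
	    --docker-server=https://index.docker.io/v1/ \
	    --docker-username=<user> --docker-password=<access-token>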
	
	* 
	* ==> storage-provisioner [e7593a21d5782f56f260f444db0a975ea4862e38b2e9fa85e828961fc60380b4] <==
	* I1024 19:25:29.917672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:25:29.941662       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:25:29.941842       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:25:29.948500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:25:29.948749       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-228070_d27d2038-845c-4fa4-839b-b1453fb7ec28!
	I1024 19:25:29.949655       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5125c969-42f3-4486-b969-fc535e305358", APIVersion:"v1", ResourceVersion:"877", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-228070_d27d2038-845c-4fa4-839b-b1453fb7ec28 became leader
	I1024 19:25:30.049844       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-228070_d27d2038-845c-4fa4-839b-b1453fb7ec28!
	

-- /stdout --
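Note: the storage-provisioner itself started cleanly here — it acquired the kube-system/k8s.io-minikube-hostpath lease and started its controller — so the failure below is not a provisioner fault. As a sketch, the leader-election record could be inspected from the same context (resource names are taken from the log above, nothing is invented):

	# The leader-election record lives in annotations on this Endpoints object
	kubectl --context addons-228070 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml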
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-228070 -n addons-228070
helpers_test.go:261: (dbg) Run:  kubectl --context addons-228070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod-restore ingress-nginx-admission-create-grpcs ingress-nginx-admission-patch-ht52w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-228070 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-grpcs ingress-nginx-admission-patch-ht52w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-228070 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-grpcs ingress-nginx-admission-patch-ht52w: exit status 1 (108.712858ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-228070/192.168.49.2
	Start Time:       Tue, 24 Oct 2023 19:27:24 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m8lx7 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-m8lx7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m9s                  default-scheduler  Successfully assigned default/nginx to addons-228070
	  Warning  Failed     5m38s                 kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m23s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m36s (x4 over 6m9s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     112s (x4 over 5m38s)  kubelet            Error: ErrImagePull
	  Warning  Failed     112s (x2 over 3m23s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     98s (x6 over 5m38s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    60s (x9 over 5m38s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod-restore
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-228070/192.168.49.2
	Start Time:       Tue, 24 Oct 2023 19:27:29 +0000
	Labels:           app=task-pv-pod-restore
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rjt45 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc-restore
	    ReadOnly:   false
	  kube-api-access-rjt45:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  6m3s                default-scheduler  Successfully assigned default/task-pv-pod-restore to addons-228070
	  Warning  Failed     2m22s               kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:61ab60b82e1a8a61f7bbba357cda18588a0f8ba93c3e638e080340d36d6ffc23 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    92s (x4 over 6m4s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     62s (x3 over 5m8s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     62s (x4 over 5m8s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    35s (x7 over 5m7s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     35s (x7 over 5m7s)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-grpcs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ht52w" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-228070 describe pod nginx task-pv-pod-restore ingress-nginx-admission-create-grpcs ingress-nginx-admission-patch-ht52w: exit status 1
--- FAIL: TestAddons/parallel/CSI (394.54s)
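Note: every Failed event above traces to Docker Hub rate limiting (toomanyrequests on docker.io/nginx:alpine and docker.io/nginx), not to the CSI paths under test. One possible mitigation sketch, assuming Docker Hub credentials were available to the CI job (the secret name regcred is illustrative, not from this report):

	# Authenticate image pulls so kubelet is not rate-limited as an anonymous client
	kubectl --context addons-228070 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	# Attach the secret to the default service account used by the test pods
	kubectl --context addons-228070 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'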

TestFunctional/parallel/PersistentVolumeClaim (189.1s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3ddb120d-d772-46f6-9652-2cc077794436] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012199881s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-419430 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-419430 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-419430 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-419430 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [721cd2fb-c261-47ac-9d81-6ca7c5afc538] Pending
helpers_test.go:344: "sp-pod" [721cd2fb-c261-47ac-9d81-6ca7c5afc538] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1024 19:41:37.739914 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:42:05.426575 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-419430 -n functional-419430
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-10-24 19:42:58.009509328 +0000 UTC m=+1169.052546624
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-419430 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-419430 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-419430/192.168.49.2
Start Time:       Tue, 24 Oct 2023 19:39:57 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxqkq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-gxqkq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/sp-pod to functional-419430
  Warning  Failed     80s (x2 over 2m23s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     80s (x2 over 2m23s)  kubelet            Error: ErrImagePull
  Normal   BackOff    65s (x2 over 2m22s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     65s (x2 over 2m22s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    52s (x3 over 3m1s)   kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-419430 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-419430 logs sp-pod -n default: exit status 1 (121.986938ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-419430 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
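Note: this is the same root cause as the CSI failure above — anonymous pulls of docker.io/nginx are rate-limited. An alternative sketch that bypasses the registry entirely, assuming the image is already cached on the CI host's Docker daemon:

	# Side-load the image into the cluster so kubelet never contacts docker.io
	docker pull docker.io/nginx    # or reuse a locally cached copy
	out/minikube-linux-arm64 -p functional-419430 image load docker.io/nginx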
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-419430
helpers_test.go:235: (dbg) docker inspect functional-419430:

-- stdout --
	[
	    {
	        "Id": "41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe",
	        "Created": "2023-10-24T19:36:52.43494224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1132785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:36:52.759966845Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/hostname",
	        "HostsPath": "/var/lib/docker/containers/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/hosts",
	        "LogPath": "/var/lib/docker/containers/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe-json.log",
	        "Name": "/functional-419430",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-419430:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-419430",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c1d7cf55723b045bc4f175e202fa57378b7294a37e44321b2331542596c09efb-init/diff:/var/lib/docker/overlay2/ab7e622cf253e7484ae8d7af3c5bb3ba83f211c878ee7a8c069db30bbba78b6c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c1d7cf55723b045bc4f175e202fa57378b7294a37e44321b2331542596c09efb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c1d7cf55723b045bc4f175e202fa57378b7294a37e44321b2331542596c09efb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c1d7cf55723b045bc4f175e202fa57378b7294a37e44321b2331542596c09efb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-419430",
	                "Source": "/var/lib/docker/volumes/functional-419430/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-419430",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-419430",
	                "name.minikube.sigs.k8s.io": "functional-419430",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b4c08bab34a544aab22b5c76e62e3107a9e6b55c3cec27ed4bf0cd3a2a91edc0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34220"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34219"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34216"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34218"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34217"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b4c08bab34a5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-419430": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "41b471c78ced",
	                        "functional-419430"
	                    ],
	                    "NetworkID": "b72c70d504686fa906a919c220a9e33854e17ea96d14b6c60b66cb1c935f3e41",
	                    "EndpointID": "84e3269bb69761c1f5b122b10af886b8787ec6a1a1ee8d99266435634b1c9282",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
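Note: the inspect output confirms the node container is Running with SSH published on 127.0.0.1:34220, which is how the post-mortem commands below reach the node. The port can be read back with the same Go template minikube itself runs later in these logs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-419430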
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-419430 -n functional-419430
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 logs -n 25: (1.862557739s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| config  | functional-419430 config unset                                         | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | cpus                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-419430 ssh -n                                               | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | functional-419430 sudo cat                                             |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                               |                   |         |         |                     |                     |
	| config  | functional-419430 config get                                           | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	|         | cpus                                                                   |                   |         |         |                     |                     |
	| license |                                                                        | minikube          | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	| cp      | functional-419430 cp                                                   | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | functional-419430:/home/docker/cp-test.txt                             |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd3508738083/001/cp-test.txt             |                   |         |         |                     |                     |
	| ssh     | functional-419430 ssh echo                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | hello                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-419430 ssh -n                                               | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | functional-419430 sudo cat                                             |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                               |                   |         |         |                     |                     |
	| ssh     | functional-419430 ssh cat                                              | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | /etc/hostname                                                          |                   |         |         |                     |                     |
	| ssh     | functional-419430 ssh sudo                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	|         | systemctl is-active docker                                             |                   |         |         |                     |                     |
	| tunnel  | functional-419430 tunnel                                               | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| tunnel  | functional-419430 tunnel                                               | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh     | functional-419430 ssh sudo                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	|         | systemctl is-active containerd                                         |                   |         |         |                     |                     |
	| tunnel  | functional-419430 tunnel                                               | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-419430 image load --daemon                                  | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-419430               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-419430 image ls                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	| image   | functional-419430 image load --daemon                                  | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-419430               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-419430 image ls                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	| image   | functional-419430 image load --daemon                                  | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-419430               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-419430 image ls                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	| image   | functional-419430 image save                                           | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-419430               |                   |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-419430 image rm                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-419430               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-419430 image ls                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	| image   | functional-419430 image load                                           | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image   | functional-419430 image ls                                             | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	| image   | functional-419430 image save --daemon                                  | functional-419430 | jenkins | v1.31.2 | 24 Oct 23 19:39 UTC | 24 Oct 23 19:39 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-419430               |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                   |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:38:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:38:50.947177 1137351 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:38:50.947326 1137351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:38:50.947330 1137351 out.go:309] Setting ErrFile to fd 2...
	I1024 19:38:50.947335 1137351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:38:50.947573 1137351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 19:38:50.947898 1137351 out.go:303] Setting JSON to false
	I1024 19:38:50.948884 1137351 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":33680,"bootTime":1698142651,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:38:50.948949 1137351 start.go:138] virtualization:  
	I1024 19:38:50.952140 1137351 out.go:177] * [functional-419430] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 19:38:50.954497 1137351 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:38:50.956516 1137351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:38:50.954724 1137351 notify.go:220] Checking for updates...
	I1024 19:38:50.960506 1137351 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:38:50.962663 1137351 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:38:50.964590 1137351 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 19:38:50.966685 1137351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:38:50.969405 1137351 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:38:50.969522 1137351 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:38:50.995670 1137351 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:38:50.995777 1137351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:38:51.085855 1137351 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-24 19:38:51.074513131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:38:51.085955 1137351 docker.go:295] overlay module found
	I1024 19:38:51.088044 1137351 out.go:177] * Using the docker driver based on existing profile
	I1024 19:38:51.090127 1137351 start.go:298] selected driver: docker
	I1024 19:38:51.090134 1137351 start.go:902] validating driver "docker" against &{Name:functional-419430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-419430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:38:51.090250 1137351 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:38:51.090350 1137351 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:38:51.166271 1137351 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-24 19:38:51.156526516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:38:51.166693 1137351 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:38:51.166761 1137351 cni.go:84] Creating CNI manager for ""
	I1024 19:38:51.166772 1137351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:38:51.166782 1137351 start_flags.go:323] config:
	{Name:functional-419430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-419430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:38:51.170400 1137351 out.go:177] * Starting control plane node functional-419430 in cluster functional-419430
	I1024 19:38:51.172391 1137351 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:38:51.174427 1137351 out.go:177] * Pulling base image ...
	I1024 19:38:51.176258 1137351 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:38:51.176309 1137351 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1024 19:38:51.176318 1137351 cache.go:57] Caching tarball of preloaded images
	I1024 19:38:51.176347 1137351 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:38:51.176401 1137351 preload.go:174] Found /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1024 19:38:51.176410 1137351 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 19:38:51.176524 1137351 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/config.json ...
	I1024 19:38:51.194958 1137351 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:38:51.194972 1137351 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:38:51.194995 1137351 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:38:51.195043 1137351 start.go:365] acquiring machines lock for functional-419430: {Name:mk99214e923937e6ef7b49e1696a8403a6cef81e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:51.195120 1137351 start.go:369] acquired machines lock for "functional-419430" in 54.876µs
	I1024 19:38:51.195140 1137351 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:38:51.195145 1137351 fix.go:54] fixHost starting: 
	I1024 19:38:51.195429 1137351 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
	I1024 19:38:51.214013 1137351 fix.go:102] recreateIfNeeded on functional-419430: state=Running err=<nil>
	W1024 19:38:51.214031 1137351 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:38:51.216689 1137351 out.go:177] * Updating the running docker "functional-419430" container ...
	I1024 19:38:51.219199 1137351 machine.go:88] provisioning docker machine ...
	I1024 19:38:51.219222 1137351 ubuntu.go:169] provisioning hostname "functional-419430"
	I1024 19:38:51.219301 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:38:51.238241 1137351 main.go:141] libmachine: Using SSH client type: native
	I1024 19:38:51.238698 1137351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34220 <nil> <nil>}
	I1024 19:38:51.238708 1137351 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-419430 && echo "functional-419430" | sudo tee /etc/hostname
	I1024 19:38:51.396314 1137351 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-419430
	
	I1024 19:38:51.396380 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:38:51.415998 1137351 main.go:141] libmachine: Using SSH client type: native
	I1024 19:38:51.416397 1137351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34220 <nil> <nil>}
	I1024 19:38:51.416412 1137351 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-419430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-419430/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-419430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:38:51.558869 1137351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:38:51.558885 1137351 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 19:38:51.558903 1137351 ubuntu.go:177] setting up certificates
	I1024 19:38:51.558911 1137351 provision.go:83] configureAuth start
	I1024 19:38:51.558980 1137351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-419430
	I1024 19:38:51.578263 1137351 provision.go:138] copyHostCerts
	I1024 19:38:51.578320 1137351 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 19:38:51.578348 1137351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 19:38:51.578421 1137351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 19:38:51.578520 1137351 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 19:38:51.578524 1137351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 19:38:51.578547 1137351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 19:38:51.578612 1137351 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 19:38:51.578616 1137351 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 19:38:51.578641 1137351 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 19:38:51.578689 1137351 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.functional-419430 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-419430]
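
The san=[...] list in the provision.go line above becomes the server certificate's Subject Alternative Names. A minimal Go sketch of issuing such a certificate from an existing CA follows; it assumes ca.pem and ca-key.pem hold a PEM certificate and a PKCS#1 RSA key, and the file names, organization, and three-year lifetime are illustrative, not minikube's code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a file and returns the DER bytes of its first PEM block.
func mustPEM(path string) []byte {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem"))
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-419430"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "functional-419430"},
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
	}
	// Sign the server cert with the CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}
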
	I1024 19:38:51.993423 1137351 provision.go:172] copyRemoteCerts
	I1024 19:38:51.993482 1137351 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:38:51.993529 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:38:52.017990 1137351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
	I1024 19:38:52.119990 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:38:52.149885 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1024 19:38:52.178810 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:38:52.208324 1137351 provision.go:86] duration metric: configureAuth took 649.399608ms
	I1024 19:38:52.208340 1137351 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:38:52.208543 1137351 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:38:52.208650 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:38:52.227704 1137351 main.go:141] libmachine: Using SSH client type: native
	I1024 19:38:52.228121 1137351 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34220 <nil> <nil>}
	I1024 19:38:52.228134 1137351 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:38:57.702669 1137351 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:38:57.702685 1137351 machine.go:91] provisioned docker machine in 6.483470914s
	I1024 19:38:57.702699 1137351 start.go:300] post-start starting for "functional-419430" (driver="docker")
	I1024 19:38:57.702708 1137351 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:38:57.702790 1137351 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:38:57.702836 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:38:57.729894 1137351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
	I1024 19:38:57.867186 1137351 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:38:57.873516 1137351 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:38:57.873541 1137351 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:38:57.873550 1137351 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:38:57.873557 1137351 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:38:57.873566 1137351 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 19:38:57.873615 1137351 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 19:38:57.873695 1137351 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 19:38:57.873790 1137351 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/test/nested/copy/1117634/hosts -> hosts in /etc/test/nested/copy/1117634
	I1024 19:38:57.873834 1137351 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1117634
	I1024 19:38:57.886526 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 19:38:57.929999 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/test/nested/copy/1117634/hosts --> /etc/test/nested/copy/1117634/hosts (40 bytes)
	I1024 19:38:57.964859 1137351 start.go:303] post-start completed in 262.145468ms
	I1024 19:38:57.964942 1137351 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:38:57.964980 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:38:57.983310 1137351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
	I1024 19:38:58.080835 1137351 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:38:58.087421 1137351 fix.go:56] fixHost completed within 6.89226696s
	I1024 19:38:58.087437 1137351 start.go:83] releasing machines lock for "functional-419430", held for 6.892308896s
	I1024 19:38:58.087520 1137351 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-419430
	I1024 19:38:58.106770 1137351 ssh_runner.go:195] Run: cat /version.json
	I1024 19:38:58.106816 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:38:58.106850 1137351 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:38:58.106919 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:38:58.129480 1137351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
	I1024 19:38:58.143313 1137351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
	I1024 19:38:58.226094 1137351 ssh_runner.go:195] Run: systemctl --version
	I1024 19:38:58.363813 1137351 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:38:58.513157 1137351 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:38:58.518741 1137351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:38:58.529312 1137351 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:38:58.529382 1137351 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:38:58.540116 1137351 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1024 19:38:58.540129 1137351 start.go:472] detecting cgroup driver to use...
	I1024 19:38:58.540158 1137351 detect.go:196] detected "cgroupfs" cgroup driver on host os
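
The log does not show how detect.go reaches "cgroupfs" here. One plausible heuristic, sketched below purely as an assumption for illustration (minikube's detect.go may check different signals), is to report "systemd" only when the host runs systemd on the cgroup v2 unified hierarchy and fall back to "cgroupfs" otherwise:

package main

import (
	"fmt"
	"os"
)

func cgroupDriver() string {
	// cgroup.controllers at the root marks the cgroup v2 unified hierarchy.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		// /run/systemd/system exists when systemd is the init system.
		if _, err := os.Stat("/run/systemd/system"); err == nil {
			return "systemd"
		}
	}
	return "cgroupfs"
}

func main() {
	fmt.Println(cgroupDriver())
}
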
	I1024 19:38:58.540210 1137351 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:38:58.554665 1137351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:38:58.568001 1137351 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:38:58.568058 1137351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:38:58.583657 1137351 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:38:58.597244 1137351 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:38:58.726790 1137351 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:38:58.884002 1137351 docker.go:214] disabling docker service ...
	I1024 19:38:58.884055 1137351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:38:58.899075 1137351 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:38:58.913239 1137351 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:38:59.044883 1137351 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:38:59.171876 1137351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:38:59.185410 1137351 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:38:59.205111 1137351 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 19:38:59.205176 1137351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:38:59.217539 1137351 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:38:59.217620 1137351 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:38:59.230283 1137351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:38:59.241806 1137351 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:38:59.253638 1137351 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:38:59.264836 1137351 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:38:59.275208 1137351 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:38:59.286096 1137351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:38:59.409708 1137351 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:38:59.574890 1137351 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:38:59.574960 1137351 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:38:59.580091 1137351 start.go:540] Will wait 60s for crictl version
	I1024 19:38:59.580146 1137351 ssh_runner.go:195] Run: which crictl
	I1024 19:38:59.584297 1137351 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:38:59.631256 1137351 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:38:59.631328 1137351 ssh_runner.go:195] Run: crio --version
	I1024 19:38:59.686886 1137351 ssh_runner.go:195] Run: crio --version
	I1024 19:38:59.730504 1137351 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 19:38:59.732604 1137351 cli_runner.go:164] Run: docker network inspect functional-419430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:38:59.751321 1137351 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1024 19:38:59.757855 1137351 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1024 19:38:59.759730 1137351 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:38:59.759797 1137351 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:38:59.814344 1137351 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:38:59.814356 1137351 crio.go:415] Images already preloaded, skipping extraction
	I1024 19:38:59.814414 1137351 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:38:59.858499 1137351 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 19:38:59.858510 1137351 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:38:59.858582 1137351 ssh_runner.go:195] Run: crio config
	I1024 19:38:59.911717 1137351 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1024 19:38:59.911746 1137351 cni.go:84] Creating CNI manager for ""
	I1024 19:38:59.911762 1137351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:38:59.911773 1137351 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:38:59.911794 1137351 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-419430 NodeName:functional-419430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:38:59.911930 1137351 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-419430"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:38:59.911996 1137351 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=functional-419430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:functional-419430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I1024 19:38:59.912066 1137351 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:38:59.922821 1137351 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:38:59.922889 1137351 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:38:59.933007 1137351 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (427 bytes)
	I1024 19:38:59.953598 1137351 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:38:59.974362 1137351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1948 bytes)
	I1024 19:38:59.995058 1137351 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:38:59.999678 1137351 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430 for IP: 192.168.49.2
	I1024 19:38:59.999713 1137351 certs.go:190] acquiring lock for shared ca certs: {Name:mka7b9c27527bac3ad97e94531dcdc2bc2059d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:38:59.999894 1137351 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key
	I1024 19:38:59.999936 1137351 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key
	I1024 19:39:00.000011 1137351 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.key
	I1024 19:39:00.000065 1137351 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/apiserver.key.dd3b5fb2
	I1024 19:39:00.000103 1137351 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/proxy-client.key
	I1024 19:39:00.000238 1137351 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem (1338 bytes)
	W1024 19:39:00.000274 1137351 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634_empty.pem, impossibly tiny 0 bytes
	I1024 19:39:00.000282 1137351 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:39:00.000309 1137351 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:39:00.000331 1137351 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:39:00.000353 1137351 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem (1675 bytes)
	I1024 19:39:00.000398 1137351 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 19:39:00.001059 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:39:00.069196 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:39:00.104115 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:39:00.137431 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1024 19:39:00.168580 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:39:00.200191 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:39:00.230692 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:39:00.262286 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:39:00.294418 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:39:00.323500 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem --> /usr/share/ca-certificates/1117634.pem (1338 bytes)
	I1024 19:39:00.352550 1137351 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /usr/share/ca-certificates/11176342.pem (1708 bytes)
	I1024 19:39:00.382188 1137351 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:39:00.405166 1137351 ssh_runner.go:195] Run: openssl version
	I1024 19:39:00.412571 1137351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:39:00.425399 1137351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:39:00.430419 1137351 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:39:00.430473 1137351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:39:00.439861 1137351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:39:00.451749 1137351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117634.pem && ln -fs /usr/share/ca-certificates/1117634.pem /etc/ssl/certs/1117634.pem"
	I1024 19:39:00.463837 1137351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117634.pem
	I1024 19:39:00.468547 1137351 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:36 /usr/share/ca-certificates/1117634.pem
	I1024 19:39:00.468601 1137351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117634.pem
	I1024 19:39:00.477372 1137351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117634.pem /etc/ssl/certs/51391683.0"
	I1024 19:39:00.488588 1137351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11176342.pem && ln -fs /usr/share/ca-certificates/11176342.pem /etc/ssl/certs/11176342.pem"
	I1024 19:39:00.500960 1137351 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11176342.pem
	I1024 19:39:00.506057 1137351 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:36 /usr/share/ca-certificates/11176342.pem
	I1024 19:39:00.506130 1137351 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11176342.pem
	I1024 19:39:00.515056 1137351 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11176342.pem /etc/ssl/certs/3ec20f2e.0"
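
The three test -L/ln -fs commands above install OpenSSL-style hash links: `openssl x509 -hash -noout` prints the certificate's subject hash, and a <hash>.0 symlink in /etc/ssl/certs is how the TLS stack locates a CA by hash. The same step as a Go sketch, with illustrative paths:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(certPath, certsDir string) error {
	// Ask openssl for the subject hash, as the log lines above do.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // drop any stale link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		panic(err)
	}
}
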
	I1024 19:39:00.526190 1137351 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:39:00.530656 1137351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 19:39:00.540161 1137351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 19:39:00.549249 1137351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 19:39:00.557578 1137351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 19:39:00.566067 1137351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 19:39:00.574862 1137351 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
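
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours (86400 seconds). The equivalent check in Go's crypto/x509, as a sketch; the path is one of the certs checked above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path lapses
// within the next duration d, matching `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
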
	I1024 19:39:00.583574 1137351 kubeadm.go:404] StartCluster: {Name:functional-419430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-419430 Namespace:default APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:39:00.583665 1137351 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:39:00.583728 1137351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:39:00.625607 1137351 cri.go:89] found id: "b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4"
	I1024 19:39:00.625619 1137351 cri.go:89] found id: "868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571"
	I1024 19:39:00.625623 1137351 cri.go:89] found id: "7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d"
	I1024 19:39:00.625636 1137351 cri.go:89] found id: "daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62"
	I1024 19:39:00.625639 1137351 cri.go:89] found id: "bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba"
	I1024 19:39:00.625645 1137351 cri.go:89] found id: "9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29"
	I1024 19:39:00.625648 1137351 cri.go:89] found id: "363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b"
	I1024 19:39:00.625651 1137351 cri.go:89] found id: "61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8"
	I1024 19:39:00.625655 1137351 cri.go:89] found id: ""
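
The IDs above come from the crictl invocation logged at cri.go:54: filter all containers (including stopped ones) by the kube-system namespace label and print one ID per line. A Go sketch of the same listing, using the exact flags shown above, with minimal error handling:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers returns the container IDs crictl reports for the
// kube-system namespace, one per non-empty output line.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Println(ids)
}
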
	I1024 19:39:00.625705 1137351 ssh_runner.go:195] Run: sudo runc list -f json
	I1024 19:39:00.650733 1137351 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b/userdata","rootfs":"/var/lib/containers/storage/overlay/fa3db1ac7d2ed7c37508fbf81b324a1d3b19125ae38e8f7e3b006a2e913583f4/merged","created":"2023-10-24T19:38:21.906048362Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a3e5c0d9","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-
o.Annotations":"{\"io.kubernetes.container.hash\":\"a3e5c0d9\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:38:21.684463125Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kube
rnetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-25rb2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d7030cf3-1b2d-4572-97c0-cc4d26319873\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-25rb2_d7030cf3-1b2d-4572-97c0-cc4d26319873/coredns/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fa3db1ac7d2ed7c37508fbf81b324a1d3b19125ae38e8f7e3b006a2e913583f4/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-25rb2_kube-system_d7030cf3-1b2d-4572-97c0-cc4d26319873_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/41d2327acbab2f4e3ec1ac5f9e614563647afc7d41838462c03e256053818275/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"41d2327acbab2f4e3ec1ac5f
9e614563647afc7d41838462c03e256053818275","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-25rb2_kube-system_d7030cf3-1b2d-4572-97c0-cc4d26319873_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/d7030cf3-1b2d-4572-97c0-cc4d26319873/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d7030cf3-1b2d-4572-97c0-cc4d26319873/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d7030cf3-1b2d-4572-97c0-cc4d26319873/containers/coredns/f32d6865\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serv
iceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d7030cf3-1b2d-4572-97c0-cc4d26319873/volumes/kubernetes.io~projected/kube-api-access-99qxb\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-25rb2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d7030cf3-1b2d-4572-97c0-cc4d26319873","kubernetes.io/config.seen":"2023-10-24T19:37:59.698316889Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8/userdata","rootfs":"/var/lib/containers/storage/overlay/00073b18997ca99a24f329f6674971bc38f6157dd2bccdc8b65aeba4d9f34a73/merged","created":"2023-10-24T19:38:21.745924301Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"4
7b81159","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"47b81159\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:38:21.65061523Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.c
ri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-419430\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"686adda5e1d6dec7d6e5b44d44518129\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-functional-419430_686adda5e1d6dec7d6e5b44d44518129/etcd/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/00073b18997ca99a24f329f6674971bc38f6157dd2bccdc8b65aeba4d9f34a73/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-419430_kube-system_686adda5e1d6dec7d6e5b44d44518129_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7c380c3624d910c3dd3bd275a7a9edbed4bf7b7565972887ba8e7003169cafdb/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7c380c3624d910c3dd3bd275a7a9edbed4bf7b7565972887ba8e7003169cafdb","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-419430_kube-system_686adda5e1d6dec7d6e5
b44d44518129_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/686adda5e1d6dec7d6e5b44d44518129/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/686adda5e1d6dec7d6e5b44d44518129/containers/etcd/d7522a77\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-419430","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.te
rminationGracePeriod":"30","io.kubernetes.pod.uid":"686adda5e1d6dec7d6e5b44d44518129","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"686adda5e1d6dec7d6e5b44d44518129","kubernetes.io/config.seen":"2023-10-24T19:37:08.030774380Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d/userdata","rootfs":"/var/lib/containers/storage/overlay/2e45be3993ff78d71c7cc65d3e0017f706e7628e827c56cb51c1dc6d8b2be821/merged","created":"2023-10-24T19:38:24.601843868Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"23321d78","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.cont
ainer.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"23321d78\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:38:24.445657709Z","io.kubernetes.cri-o.Image":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.3","io.kubernetes.cri-o.ImageRef":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-jrfn2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes
.pod.uid\":\"942d037f-d520-428f-aa73-09e53790ee49\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-jrfn2_942d037f-d520-428f-aa73-09e53790ee49/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2e45be3993ff78d71c7cc65d3e0017f706e7628e827c56cb51c1dc6d8b2be821/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-jrfn2_kube-system_942d037f-d520-428f-aa73-09e53790ee49_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/57f72b65f483988c805c302c31c77977e447801cee42216515f1a05e366ab76b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"57f72b65f483988c805c302c31c77977e447801cee42216515f1a05e366ab76b","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-jrfn2_kube-system_942d037f-d520-428f-aa73-09e53790ee49_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.T
TY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/942d037f-d520-428f-aa73-09e53790ee49/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/942d037f-d520-428f-aa73-09e53790ee49/containers/kube-proxy/c4654520\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/942d037f-d520-428f-aa73-09e53790ee49/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_
path\":\"/var/lib/kubelet/pods/942d037f-d520-428f-aa73-09e53790ee49/volumes/kubernetes.io~projected/kube-api-access-jvczb\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-jrfn2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"942d037f-d520-428f-aa73-09e53790ee49","kubernetes.io/config.seen":"2023-10-24T19:37:28.111071000Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571/userdata","rootfs":"/var/lib/containers/storage/overlay/4c577503da1285babe40f14eb04dc52c8f3d32259c151aad10dd3d14587721f5/merged","created":"2023-10-24T19:38:25.729914553Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ece20bf5","io.kubernetes.cont
ainer.name":"storage-provisioner","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ece20bf5\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:38:25.445291934Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes
.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"3ddb120d-d772-46f6-9652-2cc077794436\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_3ddb120d-d772-46f6-9652-2cc077794436/storage-provisioner/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4c577503da1285babe40f14eb04dc52c8f3d32259c151aad10dd3d14587721f5/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_3ddb120d-d772-46f6-9652-2cc077794436_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/46a55e016a2d670ca13e2af825f7eb42b8348fa07be70e5ed041814f8346ee18/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"46a55e016a2d670ca13e2af825f7eb42b8348fa07be70e5ed041814f8346ee18","io.kubernetes.cri-o.SandboxN
ame":"k8s_storage-provisioner_kube-system_3ddb120d-d772-46f6-9652-2cc077794436_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/3ddb120d-d772-46f6-9652-2cc077794436/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/3ddb120d-d772-46f6-9652-2cc077794436/containers/storage-provisioner/5b55ebd3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/3ddb120d-d772-46f6-9652-2cc077794436/volumes/kubernetes.io~projected/kube-api-access-kn56g\",\"readonly\":true,\"propaga
tion\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"3ddb120d-d772-46f6-9652-2cc077794436","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-10-24T19:37:59.690311526Z","kuberne
tes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29/userdata","rootfs":"/var/lib/containers/storage/overlay/a9a4cd1853fa0b7f295a92872bef0cef10640fd98c21165202b72953f0d64d41/merged","created":"2023-10-24T19:38:21.899337993Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"de3a6ef5","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"de3a6ef5\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy
\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:38:21.735909692Z","io.kubernetes.cri-o.Image":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.3","io.kubernetes.cri-o.ImageRef":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-419430\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c0556f42e2ef341aa712cfffa7ab6456\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-419430_c0556f42e2ef341aa712cfffa7ab6456/kube-apiserver/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":2}
","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/a9a4cd1853fa0b7f295a92872bef0cef10640fd98c21165202b72953f0d64d41/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-419430_kube-system_c0556f42e2ef341aa712cfffa7ab6456_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/dce2e2bebf5925cc66b1ca60efdf6578cc1f7dcd24f3da499b6641ad36103802/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"dce2e2bebf5925cc66b1ca60efdf6578cc1f7dcd24f3da499b6641ad36103802","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-419430_kube-system_c0556f42e2ef341aa712cfffa7ab6456_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c0556f42e2ef341aa712cfffa7ab6456/containers/kube-apiserver/1ae3b68d\",\"readonly\":false,\"propagat
ion\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c0556f42e2ef341aa712cfffa7ab6456/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver
-functional-419430","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c0556f42e2ef341aa712cfffa7ab6456","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"c0556f42e2ef341aa712cfffa7ab6456","kubernetes.io/config.seen":"2023-10-24T19:37:08.030783135Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4/userdata","rootfs":"/var/lib/containers/storage/overlay/b9546d9d9b5d787c31194e07e0004ba5adaaac85ba905aa32fd8bfbe99f71a27/merged","created":"2023-10-24T19:38:26.518382783Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1ddd5040","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1
","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1ddd5040\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:38:26.442476225Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.k
ubernetes.pod.name\":\"kindnet-l7thg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b38054cf-e4ce-4169-897f-25093931044e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-l7thg_b38054cf-e4ce-4169-897f-25093931044e/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b9546d9d9b5d787c31194e07e0004ba5adaaac85ba905aa32fd8bfbe99f71a27/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-l7thg_kube-system_b38054cf-e4ce-4169-897f-25093931044e_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f960ce271986a2434d060eaf139a468873b8870af17c6f8dc7efd1094ba9b3a2/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f960ce271986a2434d060eaf139a468873b8870af17c6f8dc7efd1094ba9b3a2","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-l7thg_kube-system_b38054cf-e4ce-4169-897f-25093931044e_0","io.kubernetes.cri-o.SeccompProfilePath":"","i
o.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b38054cf-e4ce-4169-897f-25093931044e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b38054cf-e4ce-4169-897f-25093931044e/containers/kindnet-cni/46faaadd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\
"host_path\":\"/var/lib/kubelet/pods/b38054cf-e4ce-4169-897f-25093931044e/volumes/kubernetes.io~projected/kube-api-access-tjr98\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-l7thg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b38054cf-e4ce-4169-897f-25093931044e","kubernetes.io/config.seen":"2023-10-24T19:37:28.063890743Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba/userdata","rootfs":"/var/lib/containers/storage/overlay/fe10be9f00813743dabbc66fa1dd1c106ff33d72650e09083e380ae905dd4edb/merged","created":"2023-10-24T19:38:21.924295803Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"83906433","io.kubernetes.c
ontainer.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"83906433\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:38:21.743393646Z","io.kubernetes.cri-o.Image":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.3","io.kubernetes.cri-o.ImageRef":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","i
o.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-419430\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"484874b5141f077e819455c7be9ebc7d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-419430_484874b5141f077e819455c7be9ebc7d/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fe10be9f00813743dabbc66fa1dd1c106ff33d72650e09083e380ae905dd4edb/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-419430_kube-system_484874b5141f077e819455c7be9ebc7d_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d349c68613f2eff3bd402b7b94fb92dbde274bbafe6d9cf971a4493eee5c1723/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d349c68613f2e
ff3bd402b7b94fb92dbde274bbafe6d9cf971a4493eee5c1723","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-419430_kube-system_484874b5141f077e819455c7be9ebc7d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/484874b5141f077e819455c7be9ebc7d/containers/kube-controller-manager/6ea330bb\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/484874b5141f077e819455c7be9ebc7d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagati
on\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-419430","io.kubernetes.pod.namespace":"kube-syste
m","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"484874b5141f077e819455c7be9ebc7d","kubernetes.io/config.hash":"484874b5141f077e819455c7be9ebc7d","kubernetes.io/config.seen":"2023-10-24T19:37:08.030784957Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62/userdata","rootfs":"/var/lib/containers/storage/overlay/ecab30211fee829f518dec48f9d78fd6565f56c3811ecca8fe68cb5b8133ac27/merged","created":"2023-10-24T19:38:21.869376718Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1a68c1c3","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernet
es.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1a68c1c3\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:38:21.771017118Z","io.kubernetes.cri-o.Image":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.3","io.kubernetes.cri-o.ImageRef":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-419430\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a14a4a3120eb45
759fded5a6145bffa2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-419430_a14a4a3120eb45759fded5a6145bffa2/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ecab30211fee829f518dec48f9d78fd6565f56c3811ecca8fe68cb5b8133ac27/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-419430_kube-system_a14a4a3120eb45759fded5a6145bffa2_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0f4b76c102fb3643ce29354acb5caa429dfa568a4a79b81a762eed9a77253899/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0f4b76c102fb3643ce29354acb5caa429dfa568a4a79b81a762eed9a77253899","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-419430_kube-system_a14a4a3120eb45759fded5a6145bffa2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.k
ubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a14a4a3120eb45759fded5a6145bffa2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a14a4a3120eb45759fded5a6145bffa2/containers/kube-scheduler/798c8e7f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-419430","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a14a4a3120eb45759fded5a6145bffa2","kubernetes.io/config.hash":"a14a4a3120eb45759fded5a6145bffa2","kubernetes.io/config.seen":"2023-10-24T19:37:08.030786417Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I1024 19:39:00.651320 1137351 cri.go:126] list returned 8 containers
	I1024 19:39:00.651328 1137351 cri.go:129] container: {ID:363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b Status:stopped}
	I1024 19:39:00.651344 1137351 cri.go:135] skipping {363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b stopped}: state = "stopped", want "paused"
	I1024 19:39:00.651353 1137351 cri.go:129] container: {ID:61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8 Status:stopped}
	I1024 19:39:00.651360 1137351 cri.go:135] skipping {61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8 stopped}: state = "stopped", want "paused"
	I1024 19:39:00.651366 1137351 cri.go:129] container: {ID:7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d Status:stopped}
	I1024 19:39:00.651374 1137351 cri.go:135] skipping {7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d stopped}: state = "stopped", want "paused"
	I1024 19:39:00.651379 1137351 cri.go:129] container: {ID:868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571 Status:stopped}
	I1024 19:39:00.651384 1137351 cri.go:135] skipping {868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571 stopped}: state = "stopped", want "paused"
	I1024 19:39:00.651389 1137351 cri.go:129] container: {ID:9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29 Status:stopped}
	I1024 19:39:00.651395 1137351 cri.go:135] skipping {9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29 stopped}: state = "stopped", want "paused"
	I1024 19:39:00.651399 1137351 cri.go:129] container: {ID:b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4 Status:stopped}
	I1024 19:39:00.651405 1137351 cri.go:135] skipping {b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4 stopped}: state = "stopped", want "paused"
	I1024 19:39:00.651410 1137351 cri.go:129] container: {ID:bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba Status:stopped}
	I1024 19:39:00.651417 1137351 cri.go:135] skipping {bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba stopped}: state = "stopped", want "paused"
	I1024 19:39:00.651422 1137351 cri.go:129] container: {ID:daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62 Status:stopped}
	I1024 19:39:00.651427 1137351 cri.go:135] skipping {daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62 stopped}: state = "stopped", want "paused"
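The cri.go lines above are minikube's pause check: it lists every kube-system CRI container and skips any whose state is not "paused", so a fully stopped set means the cluster is not paused. A minimal sketch for reproducing the same listing by hand, assuming crictl on the node is already pointed at the CRI-O socket (as it is inside the minikube container):

  $ minikube -p functional-419430 ssh -- \
      sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
  # prints one container ID per line; drop --quiet to also see the
  # STATE column that drives the skip decisions above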
	I1024 19:39:00.651482 1137351 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:39:00.661720 1137351 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 19:39:00.661754 1137351 kubeadm.go:636] restartCluster start
	I1024 19:39:00.661808 1137351 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 19:39:00.671930 1137351 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:39:00.672488 1137351 kubeconfig.go:92] found "functional-419430" server: "https://192.168.49.2:8441"
	I1024 19:39:00.674270 1137351 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 19:39:00.684410 1137351 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-10-24 19:36:59.373864375 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-10-24 19:38:59.989315307 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
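The diff pinpoints why a reconfigure is needed: the regenerated kubeadm.yaml swaps the apiserver's admission plugins for NamespaceAutoProvision. A hedged sketch of the kind of invocation that produces such a diff (the exact test command is not shown in this excerpt); minikube's --extra-config flag is what feeds these apiServer extraArgs:

  # hedged sketch; the actual test invocation is not shown in this log
  $ minikube start -p functional-419430 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision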
	I1024 19:39:00.684420 1137351 kubeadm.go:1128] stopping kube-system containers ...
	I1024 19:39:00.684429 1137351 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1024 19:39:00.684482 1137351 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:39:00.730011 1137351 cri.go:89] found id: "b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4"
	I1024 19:39:00.730022 1137351 cri.go:89] found id: "868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571"
	I1024 19:39:00.730028 1137351 cri.go:89] found id: "7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d"
	I1024 19:39:00.730032 1137351 cri.go:89] found id: "daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62"
	I1024 19:39:00.730035 1137351 cri.go:89] found id: "bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba"
	I1024 19:39:00.730038 1137351 cri.go:89] found id: "9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29"
	I1024 19:39:00.730043 1137351 cri.go:89] found id: "363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b"
	I1024 19:39:00.730046 1137351 cri.go:89] found id: "61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8"
	I1024 19:39:00.730049 1137351 cri.go:89] found id: ""
	I1024 19:39:00.730054 1137351 cri.go:234] Stopping containers: [b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4 868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571 7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62 bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba 9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29 363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b 61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8]
	I1024 19:39:00.730105 1137351 ssh_runner.go:195] Run: which crictl
	I1024 19:39:00.734427 1137351 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4 868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571 7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62 bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba 9d1e21c8ab2f6ed2e1a5b9ce7998add13888d0e0cf8e662d3aa0f89002f99b29 363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b 61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8
	I1024 19:39:00.804849 1137351 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 19:39:00.905163 1137351 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:39:00.916345 1137351 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Oct 24 19:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct 24 19:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct 24 19:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 24 19:37 /etc/kubernetes/scheduler.conf
	
	I1024 19:39:00.916415 1137351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1024 19:39:00.928080 1137351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1024 19:39:00.939898 1137351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1024 19:39:00.951081 1137351 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:39:00.951428 1137351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1024 19:39:00.961658 1137351 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1024 19:39:00.972578 1137351 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:39:00.972636 1137351 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1024 19:39:00.983288 1137351 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:39:00.994718 1137351 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 19:39:00.994732 1137351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:39:01.059627 1137351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:39:02.932264 1137351 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.872607752s)
	I1024 19:39:02.932285 1137351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:39:03.151196 1137351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:39:03.241197 1137351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
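Note that minikube replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against its generated config rather than running a full init. The same phases can be replayed by hand on the node, using the staged binaries exactly as the commands above do; for example:

  $ minikube -p functional-419430 ssh
  $ sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
      kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml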
	I1024 19:39:03.341142 1137351 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:39:03.341205 1137351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:39:03.354791 1137351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:39:03.886649 1137351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:39:04.386691 1137351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:39:04.414503 1137351 api_server.go:72] duration metric: took 1.073374288s to wait for apiserver process to appear ...
	I1024 19:39:04.414517 1137351 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:39:04.414532 1137351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1024 19:39:08.167112 1137351 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 19:39:08.167131 1137351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 19:39:08.167139 1137351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1024 19:39:08.249867 1137351 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 19:39:08.249883 1137351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 19:39:08.750504 1137351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1024 19:39:08.759559 1137351 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:39:08.759587 1137351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 19:39:09.250879 1137351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1024 19:39:09.260226 1137351 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:39:09.260244 1137351 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 19:39:09.750828 1137351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1024 19:39:09.760765 1137351 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1024 19:39:09.777351 1137351 api_server.go:141] control plane version: v1.28.3
	I1024 19:39:09.777369 1137351 api_server.go:131] duration metric: took 5.362846452s to wait for apiserver health ...
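The polling above is expected noise for a restart: anonymous requests get 403 until the RBAC bootstrap roles that open /healthz to unauthenticated users exist, /healthz then returns 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, and finally 200. With cluster credentials the same endpoint reports per-check detail, e.g.:

  $ kubectl --context functional-419430 get --raw='/healthz?verbose'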
	I1024 19:39:09.777378 1137351 cni.go:84] Creating CNI manager for ""
	I1024 19:39:09.777384 1137351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:39:09.780503 1137351 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:39:09.782935 1137351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:39:09.793483 1137351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 19:39:09.793495 1137351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:39:09.833190 1137351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
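With the apiserver healthy again, minikube copies its kindnet manifest to the node and applies it with the bundled kubectl. A quick follow-up check, assuming kindnet's usual app=kindnet pod label (an assumption; the manifest itself is not shown here):

  # -l app=kindnet is assumed from kindnet's usual labels
  $ kubectl --context functional-419430 -n kube-system get pods -l app=kindnet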
	I1024 19:39:10.781765 1137351 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:39:10.790341 1137351 system_pods.go:59] 8 kube-system pods found
	I1024 19:39:10.790363 1137351 system_pods.go:61] "coredns-5dd5756b68-25rb2" [d7030cf3-1b2d-4572-97c0-cc4d26319873] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:39:10.790372 1137351 system_pods.go:61] "etcd-functional-419430" [5fb1c255-8589-415e-ab26-8b9bb326b0c7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 19:39:10.790377 1137351 system_pods.go:61] "kindnet-l7thg" [b38054cf-e4ce-4169-897f-25093931044e] Running
	I1024 19:39:10.790384 1137351 system_pods.go:61] "kube-apiserver-functional-419430" [7277a777-79c2-436a-b9e8-c6136634a4b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 19:39:10.790391 1137351 system_pods.go:61] "kube-controller-manager-functional-419430" [bb0474e6-3745-4545-86df-3e5391fda1f5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 19:39:10.790401 1137351 system_pods.go:61] "kube-proxy-jrfn2" [942d037f-d520-428f-aa73-09e53790ee49] Running
	I1024 19:39:10.790410 1137351 system_pods.go:61] "kube-scheduler-functional-419430" [70a1dd7b-5bfb-42fd-bbe0-22531612e402] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 19:39:10.790419 1137351 system_pods.go:61] "storage-provisioner" [3ddb120d-d772-46f6-9652-2cc077794436] Running
	I1024 19:39:10.790424 1137351 system_pods.go:74] duration metric: took 8.649954ms to wait for pod list to return data ...
	I1024 19:39:10.790434 1137351 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:39:10.793910 1137351 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 19:39:10.793926 1137351 node_conditions.go:123] node cpu capacity is 2
	I1024 19:39:10.793936 1137351 node_conditions.go:105] duration metric: took 3.497379ms to run NodePressure ...
	I1024 19:39:10.793951 1137351 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:39:10.982988 1137351 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1024 19:39:10.987957 1137351 kubeadm.go:787] kubelet initialised
	I1024 19:39:10.987966 1137351 kubeadm.go:788] duration metric: took 4.966771ms waiting for restarted kubelet to initialise ...
	I1024 19:39:10.987977 1137351 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:39:10.994134 1137351 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-25rb2" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:13.015013 1137351 pod_ready.go:102] pod "coredns-5dd5756b68-25rb2" in "kube-system" namespace has status "Ready":"False"
	I1024 19:39:15.514248 1137351 pod_ready.go:92] pod "coredns-5dd5756b68-25rb2" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:15.514260 1137351 pod_ready.go:81] duration metric: took 4.520112724s waiting for pod "coredns-5dd5756b68-25rb2" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:15.514275 1137351 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:17.536106 1137351 pod_ready.go:102] pod "etcd-functional-419430" in "kube-system" namespace has status "Ready":"False"
	I1024 19:39:18.532731 1137351 pod_ready.go:92] pod "etcd-functional-419430" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:18.532742 1137351 pod_ready.go:81] duration metric: took 3.018460348s waiting for pod "etcd-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:18.532754 1137351 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:18.542291 1137351 pod_ready.go:92] pod "kube-apiserver-functional-419430" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:18.542302 1137351 pod_ready.go:81] duration metric: took 9.540928ms waiting for pod "kube-apiserver-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:18.542312 1137351 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:19.060557 1137351 pod_ready.go:92] pod "kube-controller-manager-functional-419430" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:19.060569 1137351 pod_ready.go:81] duration metric: took 518.250139ms waiting for pod "kube-controller-manager-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:19.060579 1137351 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jrfn2" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:19.185985 1137351 pod_ready.go:92] pod "kube-proxy-jrfn2" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:19.185995 1137351 pod_ready.go:81] duration metric: took 125.410954ms waiting for pod "kube-proxy-jrfn2" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:19.186005 1137351 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:19.585929 1137351 pod_ready.go:92] pod "kube-scheduler-functional-419430" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:19.585940 1137351 pod_ready.go:81] duration metric: took 399.929032ms waiting for pod "kube-scheduler-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:19.585950 1137351 pod_ready.go:38] duration metric: took 8.597965471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:39:19.585967 1137351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:39:19.594989 1137351 ops.go:34] apiserver oom_adj: -16
	I1024 19:39:19.595004 1137351 kubeadm.go:640] restartCluster took 18.933244429s
	I1024 19:39:19.595011 1137351 kubeadm.go:406] StartCluster complete in 19.01144709s
	I1024 19:39:19.595030 1137351 settings.go:142] acquiring lock: {Name:mkaa82b52e1ee562b451304e36332812fcccf981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:39:19.595110 1137351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:39:19.595871 1137351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/kubeconfig: {Name:mkcb958baf0d06a87d3e11266d914b0c86b46ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:39:19.596128 1137351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:39:19.596451 1137351 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:39:19.596579 1137351 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:39:19.596673 1137351 addons.go:69] Setting storage-provisioner=true in profile "functional-419430"
	I1024 19:39:19.596686 1137351 addons.go:231] Setting addon storage-provisioner=true in "functional-419430"
	W1024 19:39:19.596691 1137351 addons.go:240] addon storage-provisioner should already be in state true
	I1024 19:39:19.596750 1137351 host.go:66] Checking if "functional-419430" exists ...
	I1024 19:39:19.597082 1137351 addons.go:69] Setting default-storageclass=true in profile "functional-419430"
	I1024 19:39:19.597098 1137351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-419430"
	I1024 19:39:19.597149 1137351 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
	I1024 19:39:19.597358 1137351 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
	I1024 19:39:19.604042 1137351 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-419430" context rescaled to 1 replicas
	I1024 19:39:19.604069 1137351 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:39:19.607268 1137351 out.go:177] * Verifying Kubernetes components...
	I1024 19:39:19.621633 1137351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:39:19.630061 1137351 addons.go:231] Setting addon default-storageclass=true in "functional-419430"
	W1024 19:39:19.630071 1137351 addons.go:240] addon default-storageclass should already be in state true
	I1024 19:39:19.630092 1137351 host.go:66] Checking if "functional-419430" exists ...
	I1024 19:39:19.630528 1137351 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
	I1024 19:39:19.667050 1137351 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:39:19.671162 1137351 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:39:19.671175 1137351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:39:19.671240 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:39:19.672973 1137351 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:39:19.672983 1137351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:39:19.673039 1137351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
	I1024 19:39:19.702668 1137351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
	I1024 19:39:19.719054 1137351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
	I1024 19:39:19.754628 1137351 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 19:39:19.754662 1137351 node_ready.go:35] waiting up to 6m0s for node "functional-419430" to be "Ready" ...
	I1024 19:39:19.785322 1137351 node_ready.go:49] node "functional-419430" has status "Ready":"True"
	I1024 19:39:19.785333 1137351 node_ready.go:38] duration metric: took 30.660077ms waiting for node "functional-419430" to be "Ready" ...
	I1024 19:39:19.785341 1137351 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:39:19.839768 1137351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:39:19.857891 1137351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:39:19.990244 1137351 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-25rb2" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:20.311756 1137351 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1024 19:39:20.313816 1137351 addons.go:502] enable addons completed in 717.232751ms: enabled=[storage-provisioner default-storageclass]
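Addon enablement is just more manifests pushed through the node's kubectl (storage-provisioner.yaml and storageclass.yaml above). The resulting state can be confirmed from the host:

  $ minikube -p functional-419430 addons list
  $ kubectl --context functional-419430 get storageclass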
	I1024 19:39:20.385372 1137351 pod_ready.go:92] pod "coredns-5dd5756b68-25rb2" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:20.385383 1137351 pod_ready.go:81] duration metric: took 395.12578ms waiting for pod "coredns-5dd5756b68-25rb2" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:20.385394 1137351 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:20.785878 1137351 pod_ready.go:92] pod "etcd-functional-419430" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:20.785889 1137351 pod_ready.go:81] duration metric: took 400.488864ms waiting for pod "etcd-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:20.785901 1137351 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:21.191575 1137351 pod_ready.go:92] pod "kube-apiserver-functional-419430" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:21.191586 1137351 pod_ready.go:81] duration metric: took 405.679101ms waiting for pod "kube-apiserver-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:21.191595 1137351 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:21.585853 1137351 pod_ready.go:92] pod "kube-controller-manager-functional-419430" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:21.585865 1137351 pod_ready.go:81] duration metric: took 394.260306ms waiting for pod "kube-controller-manager-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:21.585875 1137351 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jrfn2" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:21.985553 1137351 pod_ready.go:92] pod "kube-proxy-jrfn2" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:21.985563 1137351 pod_ready.go:81] duration metric: took 399.680983ms waiting for pod "kube-proxy-jrfn2" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:21.985572 1137351 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:22.385441 1137351 pod_ready.go:92] pod "kube-scheduler-functional-419430" in "kube-system" namespace has status "Ready":"True"
	I1024 19:39:22.385451 1137351 pod_ready.go:81] duration metric: took 399.873302ms waiting for pod "kube-scheduler-functional-419430" in "kube-system" namespace to be "Ready" ...
	I1024 19:39:22.385461 1137351 pod_ready.go:38] duration metric: took 2.600111524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:39:22.385475 1137351 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:39:22.385539 1137351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:39:22.399029 1137351 api_server.go:72] duration metric: took 2.794932655s to wait for apiserver process to appear ...
	I1024 19:39:22.399042 1137351 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:39:22.399058 1137351 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1024 19:39:22.407709 1137351 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1024 19:39:22.408920 1137351 api_server.go:141] control plane version: v1.28.3
	I1024 19:39:22.408957 1137351 api_server.go:131] duration metric: took 9.884213ms to wait for apiserver health ...
	I1024 19:39:22.408964 1137351 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:39:22.588493 1137351 system_pods.go:59] 8 kube-system pods found
	I1024 19:39:22.588508 1137351 system_pods.go:61] "coredns-5dd5756b68-25rb2" [d7030cf3-1b2d-4572-97c0-cc4d26319873] Running
	I1024 19:39:22.588512 1137351 system_pods.go:61] "etcd-functional-419430" [5fb1c255-8589-415e-ab26-8b9bb326b0c7] Running
	I1024 19:39:22.588516 1137351 system_pods.go:61] "kindnet-l7thg" [b38054cf-e4ce-4169-897f-25093931044e] Running
	I1024 19:39:22.588521 1137351 system_pods.go:61] "kube-apiserver-functional-419430" [7277a777-79c2-436a-b9e8-c6136634a4b7] Running
	I1024 19:39:22.588525 1137351 system_pods.go:61] "kube-controller-manager-functional-419430" [bb0474e6-3745-4545-86df-3e5391fda1f5] Running
	I1024 19:39:22.588531 1137351 system_pods.go:61] "kube-proxy-jrfn2" [942d037f-d520-428f-aa73-09e53790ee49] Running
	I1024 19:39:22.588535 1137351 system_pods.go:61] "kube-scheduler-functional-419430" [70a1dd7b-5bfb-42fd-bbe0-22531612e402] Running
	I1024 19:39:22.588538 1137351 system_pods.go:61] "storage-provisioner" [3ddb120d-d772-46f6-9652-2cc077794436] Running
	I1024 19:39:22.588543 1137351 system_pods.go:74] duration metric: took 179.575144ms to wait for pod list to return data ...
	I1024 19:39:22.588550 1137351 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:39:22.785650 1137351 default_sa.go:45] found service account: "default"
	I1024 19:39:22.785663 1137351 default_sa.go:55] duration metric: took 197.10802ms for default service account to be created ...
	I1024 19:39:22.785671 1137351 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:39:22.989237 1137351 system_pods.go:86] 8 kube-system pods found
	I1024 19:39:22.989251 1137351 system_pods.go:89] "coredns-5dd5756b68-25rb2" [d7030cf3-1b2d-4572-97c0-cc4d26319873] Running
	I1024 19:39:22.989256 1137351 system_pods.go:89] "etcd-functional-419430" [5fb1c255-8589-415e-ab26-8b9bb326b0c7] Running
	I1024 19:39:22.989260 1137351 system_pods.go:89] "kindnet-l7thg" [b38054cf-e4ce-4169-897f-25093931044e] Running
	I1024 19:39:22.989265 1137351 system_pods.go:89] "kube-apiserver-functional-419430" [7277a777-79c2-436a-b9e8-c6136634a4b7] Running
	I1024 19:39:22.989269 1137351 system_pods.go:89] "kube-controller-manager-functional-419430" [bb0474e6-3745-4545-86df-3e5391fda1f5] Running
	I1024 19:39:22.989273 1137351 system_pods.go:89] "kube-proxy-jrfn2" [942d037f-d520-428f-aa73-09e53790ee49] Running
	I1024 19:39:22.989277 1137351 system_pods.go:89] "kube-scheduler-functional-419430" [70a1dd7b-5bfb-42fd-bbe0-22531612e402] Running
	I1024 19:39:22.989281 1137351 system_pods.go:89] "storage-provisioner" [3ddb120d-d772-46f6-9652-2cc077794436] Running
	I1024 19:39:22.989287 1137351 system_pods.go:126] duration metric: took 203.611825ms to wait for k8s-apps to be running ...
	I1024 19:39:22.989293 1137351 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:39:22.989351 1137351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:39:23.003417 1137351 system_svc.go:56] duration metric: took 14.11091ms WaitForService to wait for kubelet.
	I1024 19:39:23.003433 1137351 kubeadm.go:581] duration metric: took 3.399344244s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:39:23.003452 1137351 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:39:23.194245 1137351 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 19:39:23.194259 1137351 node_conditions.go:123] node cpu capacity is 2
	I1024 19:39:23.194270 1137351 node_conditions.go:105] duration metric: took 190.814248ms to run NodePressure ...
	I1024 19:39:23.194281 1137351 start.go:228] waiting for startup goroutines ...
	I1024 19:39:23.194286 1137351 start.go:233] waiting for cluster config update ...
	I1024 19:39:23.194295 1137351 start.go:242] writing updated cluster config ...
	I1024 19:39:23.194587 1137351 ssh_runner.go:195] Run: rm -f paused
	I1024 19:39:23.259337 1137351 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:39:23.262214 1137351 out.go:177] * Done! kubectl is now configured to use "functional-419430" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:40:35 functional-419430 crio[4206]: time="2023-10-24 19:40:35.996845630Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 24 19:40:36 functional-419430 crio[4206]: time="2023-10-24 19:40:36.607153564Z" level=info msg="Checking image status: docker.io/nginx:latest" id=7c840c9d-7f57-4fde-8668-ab5b2241cb62 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:40:36 functional-419430 crio[4206]: time="2023-10-24 19:40:36.607411606Z" level=info msg="Image docker.io/nginx:latest not found" id=7c840c9d-7f57-4fde-8668-ab5b2241cb62 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:40:48 functional-419430 crio[4206]: time="2023-10-24 19:40:48.282822754Z" level=info msg="Checking image status: docker.io/nginx:latest" id=811f223b-d0f8-441d-a24f-fe07436f7e14 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:40:48 functional-419430 crio[4206]: time="2023-10-24 19:40:48.283058429Z" level=info msg="Image docker.io/nginx:latest not found" id=811f223b-d0f8-441d-a24f-fe07436f7e14 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:41:08 functional-419430 crio[4206]: time="2023-10-24 19:41:08.407235153Z" level=info msg="Pulling image: docker.io/nginx:latest" id=14a5d6cc-c2d0-49b8-98aa-fc67e6258f41 name=/runtime.v1.ImageService/PullImage
	Oct 24 19:41:08 functional-419430 crio[4206]: time="2023-10-24 19:41:08.409219726Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 24 19:41:20 functional-419430 crio[4206]: time="2023-10-24 19:41:20.282693923Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=31c863f3-ca0a-4ac7-a028-1549a9f0a24a name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:41:20 functional-419430 crio[4206]: time="2023-10-24 19:41:20.282980625Z" level=info msg="Image docker.io/nginx:alpine not found" id=31c863f3-ca0a-4ac7-a028-1549a9f0a24a name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:41:34 functional-419430 crio[4206]: time="2023-10-24 19:41:34.282827533Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d2b2296a-5d95-410b-8eb9-5ff7fcfdefbb name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:41:34 functional-419430 crio[4206]: time="2023-10-24 19:41:34.283068681Z" level=info msg="Image docker.io/nginx:alpine not found" id=d2b2296a-5d95-410b-8eb9-5ff7fcfdefbb name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:41:38 functional-419430 crio[4206]: time="2023-10-24 19:41:38.691616332Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=6a48af25-53ab-4c16-960e-4bc94105cbf6 name=/runtime.v1.ImageService/PullImage
	Oct 24 19:41:38 functional-419430 crio[4206]: time="2023-10-24 19:41:38.693646977Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Oct 24 19:41:53 functional-419430 crio[4206]: time="2023-10-24 19:41:53.283203009Z" level=info msg="Checking image status: docker.io/nginx:latest" id=1dc540ba-8bbf-405e-a326-8f7a9aac4c39 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:41:53 functional-419430 crio[4206]: time="2023-10-24 19:41:53.283440990Z" level=info msg="Image docker.io/nginx:latest not found" id=1dc540ba-8bbf-405e-a326-8f7a9aac4c39 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:42:06 functional-419430 crio[4206]: time="2023-10-24 19:42:06.283302642Z" level=info msg="Checking image status: docker.io/nginx:latest" id=270af792-46e0-4798-962e-dec6365b2197 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:42:06 functional-419430 crio[4206]: time="2023-10-24 19:42:06.283539621Z" level=info msg="Image docker.io/nginx:latest not found" id=270af792-46e0-4798-962e-dec6365b2197 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:42:08 functional-419430 crio[4206]: time="2023-10-24 19:42:08.984126168Z" level=info msg="Pulling image: docker.io/nginx:latest" id=349e498e-94dd-4cf3-a7a2-06f459dfea4b name=/runtime.v1.ImageService/PullImage
	Oct 24 19:42:08 functional-419430 crio[4206]: time="2023-10-24 19:42:08.985935957Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Oct 24 19:42:24 functional-419430 crio[4206]: time="2023-10-24 19:42:24.283171238Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=231759b4-fd89-4b8e-bffc-ea96043769ca name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:42:24 functional-419430 crio[4206]: time="2023-10-24 19:42:24.283431733Z" level=info msg="Image docker.io/nginx:alpine not found" id=231759b4-fd89-4b8e-bffc-ea96043769ca name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:42:38 functional-419430 crio[4206]: time="2023-10-24 19:42:38.283648742Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=afbdb8d0-f552-4173-9d01-0ba646dcbe41 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:42:38 functional-419430 crio[4206]: time="2023-10-24 19:42:38.283930341Z" level=info msg="Image docker.io/nginx:alpine not found" id=afbdb8d0-f552-4173-9d01-0ba646dcbe41 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:42:50 functional-419430 crio[4206]: time="2023-10-24 19:42:50.283399108Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=56d448b2-5a81-4ef8-a396-9162fae76527 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 19:42:50 functional-419430 crio[4206]: time="2023-10-24 19:42:50.283645638Z" level=info msg="Image docker.io/nginx:alpine not found" id=56d448b2-5a81-4ef8-a396-9162fae76527 name=/runtime.v1.ImageService/ImageStatus
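The CRI-O journal captures the pull loop behind the test's nginx pods: each "Checking image status" finds docker.io/nginx:{alpine,latest} missing, and none of the "Trying to access" retries ever logs a completed pull. Retrying the pull by hand on the node usually surfaces the underlying registry error (rate limiting, DNS, proxy) that these status lines omit; a sketch:

  $ minikube -p functional-419430 ssh -- \
      sudo crictl pull docker.io/library/nginx:alpine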
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f5695fc3a6ce8       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   3 minutes ago       Running             kindnet-cni               2                   f960ce271986a       kindnet-l7thg
	c02609495510c       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd   3 minutes ago       Running             kube-proxy                2                   57f72b65f4839       kube-proxy-jrfn2
	9be1f300b3677       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       2                   46a55e016a2d6       storage-provisioner
	3430694a65539       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   3 minutes ago       Running             coredns                   3                   41d2327acbab2       coredns-5dd5756b68-25rb2
	b92f4b24bb880       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16   3 minutes ago       Running             kube-controller-manager   2                   d349c68613f2e       kube-controller-manager-functional-419430
	62310269fce44       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7   3 minutes ago       Running             kube-apiserver            0                   1fcc0dadc819c       kube-apiserver-functional-419430
	3a794418dfef4       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314   3 minutes ago       Running             kube-scheduler            2                   0f4b76c102fb3       kube-scheduler-functional-419430
	25b84f069036e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   3 minutes ago       Running             etcd                      3                   7c380c3624d91       etcd-functional-419430
	b2ef1ca242b59       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   4 minutes ago       Exited              kindnet-cni               1                   f960ce271986a       kindnet-l7thg
	868c405ef208d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   4 minutes ago       Exited              storage-provisioner       1                   46a55e016a2d6       storage-provisioner
	7fb4ef232c6d6       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd   4 minutes ago       Exited              kube-proxy                1                   57f72b65f4839       kube-proxy-jrfn2
	daafec952a08d       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314   4 minutes ago       Exited              kube-scheduler            1                   0f4b76c102fb3       kube-scheduler-functional-419430
	bb21a760cec5e       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16   4 minutes ago       Exited              kube-controller-manager   1                   d349c68613f2e       kube-controller-manager-functional-419430
	363d89f45516b       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   4 minutes ago       Exited              coredns                   2                   41d2327acbab2       coredns-5dd5756b68-25rb2
	61a499a388ee4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   4 minutes ago       Exited              etcd                      2                   7c380c3624d91       etcd-functional-419430
	
	* 
	* ==> coredns [3430694a655398e8b4fbb110c5fc652776ef5304334ff413e87c1a369ce1b17f] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50205 - 57302 "HINFO IN 138589796413853654.4503567613556544834. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023485463s
	
	* 
	* ==> coredns [363d89f45516b6f2a1cd6b46b0ccf5029376508a2e903721b78b93c27c3afa3b] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52927 - 29720 "HINFO IN 5961598288061652946.9000955278027342715. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024676597s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-419430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-419430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=functional-419430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_37_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:37:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-419430
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:42:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:40:09 +0000   Tue, 24 Oct 2023 19:37:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:40:09 +0000   Tue, 24 Oct 2023 19:37:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:40:09 +0000   Tue, 24 Oct 2023 19:37:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:40:09 +0000   Tue, 24 Oct 2023 19:37:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-419430
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce980d4ee9af44969c61edf59a8f3910
	  System UUID:                d36dbf76-97bb-49f9-81c9-8dd19d8f0ce1
	  Boot ID:                    f05db690-1143-478b-8d18-db062f271a9b
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-5dd5756b68-25rb2                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m31s
	  kube-system                 etcd-functional-419430                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m45s
	  kube-system                 kindnet-l7thg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m32s
	  kube-system                 kube-apiserver-functional-419430             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-controller-manager-functional-419430    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 kube-proxy-jrfn2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-scheduler-functional-419430             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m49s                  kube-proxy       
	  Normal   Starting                 4m32s                  kube-proxy       
	  Normal   Starting                 5m30s                  kube-proxy       
	  Normal   Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m44s                  kubelet          Node functional-419430 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m44s                  kubelet          Node functional-419430 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m44s                  kubelet          Node functional-419430 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m32s                  node-controller  Node functional-419430 event: Registered Node functional-419430 in Controller
	  Normal   NodeReady                5m                     kubelet          Node functional-419430 status is now: NodeReady
	  Warning  ContainerGCFailed        4m44s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m22s                  node-controller  Node functional-419430 event: Registered Node functional-419430 in Controller
	  Normal   Starting                 3m56s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m56s (x8 over 3m56s)  kubelet          Node functional-419430 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m56s (x8 over 3m56s)  kubelet          Node functional-419430 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m56s (x8 over 3m56s)  kubelet          Node functional-419430 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m39s                  node-controller  Node functional-419430 event: Registered Node functional-419430 in Controller
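	
	The Allocated resources figures are plain sums over the pod table above: CPU requests 100m+100m+100m+250m+200m+100m = 850m of the node's 2000m capacity, and memory requests 70Mi+100Mi+50Mi = 220Mi of 8022500Ki. A quick check of that arithmetic (capacities copied from this section; the integer percentages match kubectl's truncating output):
	
	    package main
	
	    import "fmt"
	
	    func main() {
	    	// CPU requests in millicores, from the Non-terminated Pods table.
	    	cpu := []int{100, 100, 100, 250, 200, 100}
	    	// Memory requests in Mi: coredns, etcd, kindnet.
	    	mem := []int{70, 100, 50}
	
	    	cpuTotal, memTotal := 0, 0
	    	for _, m := range cpu {
	    		cpuTotal += m
	    	}
	    	for _, m := range mem {
	    		memTotal += m
	    	}
	
	    	const cpuCapacity = 2000      // 2 CPUs, in millicores
	    	const memCapacityKi = 8022500 // from the Capacity block above
	
	    	fmt.Printf("cpu    %dm (%d%%)\n", cpuTotal, cpuTotal*100/cpuCapacity)         // 850m (42%)
	    	fmt.Printf("memory %dMi (%d%%)\n", memTotal, memTotal*1024*100/memCapacityKi) // 220Mi (2%)
	    }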
	
	* 
	* ==> dmesg <==
	* [  +0.001113] FS-Cache: O-key=[8] '80623b0000000000'
	[  +0.000757] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000994] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000f7ef6ada
	[  +0.001085] FS-Cache: N-key=[8] '80623b0000000000'
	[  +0.002635] FS-Cache: Duplicate cookie detected
	[  +0.000750] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000978] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000bf36fe5e
	[  +0.001181] FS-Cache: O-key=[8] '80623b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000b7ed4e62
	[  +0.001156] FS-Cache: N-key=[8] '80623b0000000000'
	[  +3.138037] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000984] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000a1cd37ca
	[  +0.001134] FS-Cache: O-key=[8] '7f623b0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001008] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000f7ef6ada
	[  +0.001075] FS-Cache: N-key=[8] '7f623b0000000000'
	[  +0.302369] FS-Cache: Duplicate cookie detected
	[  +0.000770] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001049] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=000000003058710d
	[  +0.001121] FS-Cache: O-key=[8] '85623b0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000c7864bf1
	[  +0.001088] FS-Cache: N-key=[8] '85623b0000000000'
	
	* 
	* ==> etcd [25b84f069036ef680576b424e2ec3fe984df7553b189e9b4b8c7ea905f8cdd33] <==
	* {"level":"info","ts":"2023-10-24T19:39:04.301775Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T19:39:04.301814Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-24T19:39:04.30207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-10-24T19:39:04.30216Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-10-24T19:39:04.305872Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:39:04.305951Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T19:39:04.310011Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T19:39:04.310363Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T19:39:04.310175Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-24T19:39:04.310933Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-24T19:39:04.310856Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:39:05.979839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2023-10-24T19:39:05.979984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2023-10-24T19:39:05.98003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-10-24T19:39:05.980068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2023-10-24T19:39:05.980104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-10-24T19:39:05.980146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2023-10-24T19:39:05.980182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-10-24T19:39:05.985988Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-419430 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:39:05.986223Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:39:05.987232Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:39:05.987407Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:39:05.988228Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-24T19:39:05.988348Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:39:05.988424Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [61a499a388ee47a1df2cf494d20aee5f533d8004d9aad77f13d9ce0dca87a0b8] <==
	* {"level":"info","ts":"2023-10-24T19:38:22.158621Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T19:38:23.725808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-24T19:38:23.725925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-24T19:38:23.725979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-24T19:38:23.726021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2023-10-24T19:38:23.726062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-10-24T19:38:23.72611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2023-10-24T19:38:23.726145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-10-24T19:38:23.729993Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-419430 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T19:38:23.7301Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:38:23.731043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T19:38:23.731267Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T19:38:23.732164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-24T19:38:23.74722Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T19:38:23.747326Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-24T19:38:52.391103Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-24T19:38:52.391166Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-419430","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-10-24T19:38:52.391233Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:38:52.391309Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:38:52.547844Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-24T19:38:52.547947Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-24T19:38:52.547996Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-10-24T19:38:52.550699Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-24T19:38:52.5508Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-10-24T19:38:52.550811Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-419430","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  19:42:59 up  9:25,  0 users,  load average: 0.49, 0.67, 1.11
	Linux functional-419430 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [b2ef1ca242b59f9353104a11763e933441cb44d69524c86e61f58057b752e7c4] <==
	* I1024 19:38:26.610966       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1024 19:38:26.611210       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1024 19:38:26.613574       1 main.go:116] setting mtu 1500 for CNI 
	I1024 19:38:26.613643       1 main.go:146] kindnetd IP family: "ipv4"
	I1024 19:38:26.613678       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1024 19:38:27.014683       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:38:27.014793       1 main.go:227] handling current node
	I1024 19:38:37.118860       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:38:37.118888       1 main.go:227] handling current node
	I1024 19:38:47.125438       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:38:47.125464       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [f5695fc3a6ce863a22afb0f3f2e42933018c902b0d6560f7bbeab5c20f144fea] <==
	* I1024 19:40:50.344620       1 main.go:227] handling current node
	I1024 19:41:00.348565       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:41:00.348594       1 main.go:227] handling current node
	I1024 19:41:10.352285       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:41:10.405826       1 main.go:227] handling current node
	I1024 19:41:20.414841       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:41:20.414863       1 main.go:227] handling current node
	I1024 19:41:30.423783       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:41:30.423815       1 main.go:227] handling current node
	I1024 19:41:40.433452       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:41:40.433480       1 main.go:227] handling current node
	I1024 19:41:50.445475       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:41:50.445500       1 main.go:227] handling current node
	I1024 19:42:00.454694       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:42:00.454723       1 main.go:227] handling current node
	I1024 19:42:10.459088       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:42:10.459115       1 main.go:227] handling current node
	I1024 19:42:20.462618       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:42:20.462645       1 main.go:227] handling current node
	I1024 19:42:30.471839       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:42:30.471870       1 main.go:227] handling current node
	I1024 19:42:40.482213       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:42:40.482262       1 main.go:227] handling current node
	I1024 19:42:50.493107       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:42:50.493134       1 main.go:227] handling current node
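	
	Both kindnet instances settle into the same steady state: every ~10 seconds they re-list the cluster's nodes, find only 192.168.49.2, and handle the current node. An illustrative sketch of that reconcile cadence (not kindnet's actual source; the real daemon lists nodes from the API server and programs routes):
	
	    package main
	
	    import (
	    	"fmt"
	    	"time"
	    )
	
	    // reconcile stands in for kindnet's per-node handling; here it only
	    // logs, mirroring the "Handling node with IPs" lines above.
	    func reconcile(nodeIPs map[string]struct{}) {
	    	fmt.Printf("Handling node with IPs: %v\n", nodeIPs)
	    }
	
	    func main() {
	    	ticker := time.NewTicker(10 * time.Second) // matches the log cadence
	    	defer ticker.Stop()
	    	for range ticker.C {
	    		// This single-node cluster always yields the same entry.
	    		reconcile(map[string]struct{}{"192.168.49.2": {}})
	    	}
	    }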
	
	* 
	* ==> kube-apiserver [62310269fce4419e784a89156622976cf3d4e49979ee76ab7ab010a64b71ffca] <==
	* I1024 19:39:08.085370       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1024 19:39:08.335078       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:39:08.377415       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 19:39:08.377905       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1024 19:39:08.377970       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1024 19:39:08.378296       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1024 19:39:08.379871       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:39:08.379968       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 19:39:08.385688       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 19:39:08.385796       1 aggregator.go:166] initial CRD sync complete...
	I1024 19:39:08.385831       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 19:39:08.385863       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 19:39:08.385898       1 cache.go:39] Caches are synced for autoregister controller
	E1024 19:39:08.391753       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1024 19:39:08.403065       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1024 19:39:09.080898       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 19:39:10.773618       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1024 19:39:10.894843       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1024 19:39:10.905305       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1024 19:39:10.965946       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:39:10.973501       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1024 19:39:27.218646       1 controller.go:624] quota admission added evaluator for: endpoints
	I1024 19:39:27.388417       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.219.56"}
	I1024 19:39:27.415854       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1024 19:39:34.612232       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.228.25"}
	
	* 
	* ==> kube-controller-manager [b92f4b24bb88084e51f789338a43ade965e607cc3732f594bae67749e4b56a6b] <==
	* I1024 19:39:20.952673       1 shared_informer.go:318] Caches are synced for disruption
	I1024 19:39:20.958100       1 shared_informer.go:318] Caches are synced for job
	I1024 19:39:20.960339       1 shared_informer.go:318] Caches are synced for PVC protection
	I1024 19:39:20.965799       1 shared_informer.go:318] Caches are synced for PV protection
	I1024 19:39:20.965892       1 shared_informer.go:318] Caches are synced for HPA
	I1024 19:39:20.969022       1 shared_informer.go:318] Caches are synced for ephemeral
	I1024 19:39:20.970704       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1024 19:39:20.982098       1 shared_informer.go:318] Caches are synced for taint
	I1024 19:39:20.982230       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1024 19:39:20.982245       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1024 19:39:20.982271       1 taint_manager.go:211] "Sending events to api server"
	I1024 19:39:20.982361       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-419430"
	I1024 19:39:20.982412       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1024 19:39:20.982723       1 event.go:307] "Event occurred" object="functional-419430" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-419430 event: Registered Node functional-419430 in Controller"
	I1024 19:39:20.989883       1 shared_informer.go:318] Caches are synced for attach detach
	I1024 19:39:21.044862       1 shared_informer.go:318] Caches are synced for service account
	I1024 19:39:21.068914       1 shared_informer.go:318] Caches are synced for endpoint
	I1024 19:39:21.078291       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:39:21.100956       1 shared_informer.go:318] Caches are synced for namespace
	I1024 19:39:21.118747       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:39:21.155891       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1024 19:39:21.477596       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:39:21.477638       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1024 19:39:21.507066       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:39:57.331446       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	* 
	* ==> kube-controller-manager [bb21a760cec5ec4e338393b7f403e4e57fd7a2ffa7580f1c0a68f5b3b4f5c1ba] <==
	* I1024 19:38:37.893879       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1024 19:38:37.893883       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1024 19:38:37.893889       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1024 19:38:37.896120       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1024 19:38:37.902571       1 shared_informer.go:318] Caches are synced for deployment
	I1024 19:38:37.904796       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1024 19:38:37.905945       1 shared_informer.go:318] Caches are synced for GC
	I1024 19:38:37.908203       1 shared_informer.go:318] Caches are synced for persistent volume
	I1024 19:38:37.915401       1 shared_informer.go:318] Caches are synced for TTL
	I1024 19:38:37.918686       1 shared_informer.go:318] Caches are synced for taint
	I1024 19:38:37.918789       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I1024 19:38:37.918875       1 taint_manager.go:211] "Sending events to api server"
	I1024 19:38:37.918792       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1024 19:38:37.919202       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-419430"
	I1024 19:38:37.919284       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1024 19:38:37.919601       1 event.go:307] "Event occurred" object="functional-419430" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-419430 event: Registered Node functional-419430 in Controller"
	I1024 19:38:37.967711       1 shared_informer.go:318] Caches are synced for disruption
	I1024 19:38:37.999379       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1024 19:38:38.023846       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1024 19:38:38.033584       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:38:38.042916       1 shared_informer.go:318] Caches are synced for endpoint
	I1024 19:38:38.078212       1 shared_informer.go:318] Caches are synced for resource quota
	I1024 19:38:38.441299       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:38:38.459405       1 shared_informer.go:318] Caches are synced for garbage collector
	I1024 19:38:38.459454       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [7fb4ef232c6d626387ced541c436535fcb36cd6432bd9933fb03277f20fd242d] <==
	* I1024 19:38:25.108367       1 server_others.go:69] "Using iptables proxy"
	I1024 19:38:26.675545       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1024 19:38:26.853462       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 19:38:26.859646       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:38:26.859750       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 19:38:26.859784       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 19:38:26.859871       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:38:26.860084       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:38:26.863397       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:38:26.864716       1 config.go:188] "Starting service config controller"
	I1024 19:38:26.864831       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:38:26.864888       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:38:26.864921       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:38:26.865487       1 config.go:315] "Starting node config controller"
	I1024 19:38:26.880981       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:38:26.965448       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:38:26.969803       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:38:26.985816       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c02609495510c59e60a1ea8eb0070ca8564e2df3d964a5efd363dc4ad0ec8418] <==
	* I1024 19:39:09.974475       1 server_others.go:69] "Using iptables proxy"
	I1024 19:39:10.026636       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1024 19:39:10.096781       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 19:39:10.099095       1 server_others.go:152] "Using iptables Proxier"
	I1024 19:39:10.099145       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 19:39:10.099156       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 19:39:10.099238       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 19:39:10.099515       1 server.go:846] "Version info" version="v1.28.3"
	I1024 19:39:10.099531       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:39:10.100568       1 config.go:188] "Starting service config controller"
	I1024 19:39:10.100641       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 19:39:10.100661       1 config.go:97] "Starting endpoint slice config controller"
	I1024 19:39:10.100665       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 19:39:10.101219       1 config.go:315] "Starting node config controller"
	I1024 19:39:10.101237       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 19:39:10.200718       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 19:39:10.200771       1 shared_informer.go:318] Caches are synced for service config
	I1024 19:39:10.202089       1 shared_informer.go:318] Caches are synced for node config
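	
	The "Waiting for caches to sync" / "Caches are synced" pairs in both kube-proxy runs (and in the scheduler and controller-manager sections) are client-go's standard informer startup handshake: the component refuses to act until its local caches mirror the API server. A hedged, generic sketch of that pattern (names here are illustrative, not kube-proxy's actual wiring):
	
	    package main
	
	    import (
	    	"log"
	    	"time"
	
	    	"k8s.io/client-go/informers"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/rest"
	    	"k8s.io/client-go/tools/cache"
	    )
	
	    func main() {
	    	// In-cluster config, as a pod like kube-proxy would use.
	    	cfg, err := rest.InClusterConfig()
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	client := kubernetes.NewForConfigOrDie(cfg)
	
	    	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	    	services := factory.Core().V1().Services().Informer()
	    	endpointSlices := factory.Discovery().V1().EndpointSlices().Informer()
	
	    	stop := make(chan struct{})
	    	defer close(stop)
	    	factory.Start(stop)
	
	    	// Block until both caches are warm -- the moment the log prints
	    	// "Caches are synced for service config" / "endpoint slice config".
	    	if !cache.WaitForCacheSync(stop, services.HasSynced, endpointSlices.HasSynced) {
	    		log.Fatal("caches never synced")
	    	}
	    	log.Println("caches are synced; safe to start programming rules")
	    }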
	
	* 
	* ==> kube-scheduler [3a794418dfef463fe701d155ce1030f3822bd6efe98706788456b655bf0a874d] <==
	* I1024 19:39:05.456810       1 serving.go:348] Generated self-signed cert in-memory
	W1024 19:39:08.264687       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1024 19:39:08.264821       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1024 19:39:08.264867       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1024 19:39:08.264926       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1024 19:39:08.338588       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 19:39:08.338622       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:39:08.340504       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 19:39:08.340598       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 19:39:08.340678       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:39:08.340912       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 19:39:08.442023       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [daafec952a08dfadb6ec7105f7047ce3a68b7060c200e5896c4b60ea00af4b62] <==
	* I1024 19:38:25.023160       1 serving.go:348] Generated self-signed cert in-memory
	I1024 19:38:26.698829       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1024 19:38:26.698925       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 19:38:26.714757       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1024 19:38:26.714865       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1024 19:38:26.714969       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1024 19:38:26.715007       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:38:26.715053       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1024 19:38:26.715081       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1024 19:38:26.722121       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1024 19:38:26.722253       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1024 19:38:26.815804       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1024 19:38:26.815942       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1024 19:38:26.816060       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:38:52.389580       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1024 19:38:52.389619       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1024 19:38:52.389819       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Oct 24 19:41:53 functional-419430 kubelet[4477]: E1024 19:41:53.283966    4477 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="721cd2fb-c261-47ac-9d81-6ca7c5afc538"
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.471106    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-57f72b65f483988c805c302c31c77977e447801cee42216515f1a05e366ab76b: Error finding container 57f72b65f483988c805c302c31c77977e447801cee42216515f1a05e366ab76b: Status 404 returned error can't find the container with id 57f72b65f483988c805c302c31c77977e447801cee42216515f1a05e366ab76b
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.471291    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-f960ce271986a2434d060eaf139a468873b8870af17c6f8dc7efd1094ba9b3a2: Error finding container f960ce271986a2434d060eaf139a468873b8870af17c6f8dc7efd1094ba9b3a2: Status 404 returned error can't find the container with id f960ce271986a2434d060eaf139a468873b8870af17c6f8dc7efd1094ba9b3a2
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.471434    4477 manager.go:1106] Failed to create existing container: /crio-f960ce271986a2434d060eaf139a468873b8870af17c6f8dc7efd1094ba9b3a2: Error finding container f960ce271986a2434d060eaf139a468873b8870af17c6f8dc7efd1094ba9b3a2: Status 404 returned error can't find the container with id f960ce271986a2434d060eaf139a468873b8870af17c6f8dc7efd1094ba9b3a2
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.471577    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-fd9b5af173b5ddb67de7ce3a294dbe44567ccb83da8d9520e00a9dfaefc2d1d7: Error finding container fd9b5af173b5ddb67de7ce3a294dbe44567ccb83da8d9520e00a9dfaefc2d1d7: Status 404 returned error can't find the container with id fd9b5af173b5ddb67de7ce3a294dbe44567ccb83da8d9520e00a9dfaefc2d1d7
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.471728    4477 manager.go:1106] Failed to create existing container: /crio-7c380c3624d910c3dd3bd275a7a9edbed4bf7b7565972887ba8e7003169cafdb: Error finding container 7c380c3624d910c3dd3bd275a7a9edbed4bf7b7565972887ba8e7003169cafdb: Status 404 returned error can't find the container with id 7c380c3624d910c3dd3bd275a7a9edbed4bf7b7565972887ba8e7003169cafdb
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.471939    4477 manager.go:1106] Failed to create existing container: /crio-46a55e016a2d670ca13e2af825f7eb42b8348fa07be70e5ed041814f8346ee18: Error finding container 46a55e016a2d670ca13e2af825f7eb42b8348fa07be70e5ed041814f8346ee18: Status 404 returned error can't find the container with id 46a55e016a2d670ca13e2af825f7eb42b8348fa07be70e5ed041814f8346ee18
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.472163    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-0f4b76c102fb3643ce29354acb5caa429dfa568a4a79b81a762eed9a77253899: Error finding container 0f4b76c102fb3643ce29354acb5caa429dfa568a4a79b81a762eed9a77253899: Status 404 returned error can't find the container with id 0f4b76c102fb3643ce29354acb5caa429dfa568a4a79b81a762eed9a77253899
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.472364    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-41d2327acbab2f4e3ec1ac5f9e614563647afc7d41838462c03e256053818275: Error finding container 41d2327acbab2f4e3ec1ac5f9e614563647afc7d41838462c03e256053818275: Status 404 returned error can't find the container with id 41d2327acbab2f4e3ec1ac5f9e614563647afc7d41838462c03e256053818275
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.472582    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-7c380c3624d910c3dd3bd275a7a9edbed4bf7b7565972887ba8e7003169cafdb: Error finding container 7c380c3624d910c3dd3bd275a7a9edbed4bf7b7565972887ba8e7003169cafdb: Status 404 returned error can't find the container with id 7c380c3624d910c3dd3bd275a7a9edbed4bf7b7565972887ba8e7003169cafdb
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.472796    4477 manager.go:1106] Failed to create existing container: /crio-dce2e2bebf5925cc66b1ca60efdf6578cc1f7dcd24f3da499b6641ad36103802: Error finding container dce2e2bebf5925cc66b1ca60efdf6578cc1f7dcd24f3da499b6641ad36103802: Status 404 returned error can't find the container with id dce2e2bebf5925cc66b1ca60efdf6578cc1f7dcd24f3da499b6641ad36103802
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.473003    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-dce2e2bebf5925cc66b1ca60efdf6578cc1f7dcd24f3da499b6641ad36103802: Error finding container dce2e2bebf5925cc66b1ca60efdf6578cc1f7dcd24f3da499b6641ad36103802: Status 404 returned error can't find the container with id dce2e2bebf5925cc66b1ca60efdf6578cc1f7dcd24f3da499b6641ad36103802
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.473232    4477 manager.go:1106] Failed to create existing container: /crio-fd9b5af173b5ddb67de7ce3a294dbe44567ccb83da8d9520e00a9dfaefc2d1d7: Error finding container fd9b5af173b5ddb67de7ce3a294dbe44567ccb83da8d9520e00a9dfaefc2d1d7: Status 404 returned error can't find the container with id fd9b5af173b5ddb67de7ce3a294dbe44567ccb83da8d9520e00a9dfaefc2d1d7
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.473521    4477 manager.go:1106] Failed to create existing container: /crio-57f72b65f483988c805c302c31c77977e447801cee42216515f1a05e366ab76b: Error finding container 57f72b65f483988c805c302c31c77977e447801cee42216515f1a05e366ab76b: Status 404 returned error can't find the container with id 57f72b65f483988c805c302c31c77977e447801cee42216515f1a05e366ab76b
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.473717    4477 manager.go:1106] Failed to create existing container: /crio-41d2327acbab2f4e3ec1ac5f9e614563647afc7d41838462c03e256053818275: Error finding container 41d2327acbab2f4e3ec1ac5f9e614563647afc7d41838462c03e256053818275: Status 404 returned error can't find the container with id 41d2327acbab2f4e3ec1ac5f9e614563647afc7d41838462c03e256053818275
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.473946    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-46a55e016a2d670ca13e2af825f7eb42b8348fa07be70e5ed041814f8346ee18: Error finding container 46a55e016a2d670ca13e2af825f7eb42b8348fa07be70e5ed041814f8346ee18: Status 404 returned error can't find the container with id 46a55e016a2d670ca13e2af825f7eb42b8348fa07be70e5ed041814f8346ee18
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.474143    4477 manager.go:1106] Failed to create existing container: /crio-0f4b76c102fb3643ce29354acb5caa429dfa568a4a79b81a762eed9a77253899: Error finding container 0f4b76c102fb3643ce29354acb5caa429dfa568a4a79b81a762eed9a77253899: Status 404 returned error can't find the container with id 0f4b76c102fb3643ce29354acb5caa429dfa568a4a79b81a762eed9a77253899
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.474367    4477 manager.go:1106] Failed to create existing container: /crio-d349c68613f2eff3bd402b7b94fb92dbde274bbafe6d9cf971a4493eee5c1723: Error finding container d349c68613f2eff3bd402b7b94fb92dbde274bbafe6d9cf971a4493eee5c1723: Status 404 returned error can't find the container with id d349c68613f2eff3bd402b7b94fb92dbde274bbafe6d9cf971a4493eee5c1723
	Oct 24 19:42:03 functional-419430 kubelet[4477]: E1024 19:42:03.474596    4477 manager.go:1106] Failed to create existing container: /docker/41b471c78ced1a52c85881339fb44d84396d42b477e6ebc8c459efcfedb43fbe/crio-d349c68613f2eff3bd402b7b94fb92dbde274bbafe6d9cf971a4493eee5c1723: Error finding container d349c68613f2eff3bd402b7b94fb92dbde274bbafe6d9cf971a4493eee5c1723: Status 404 returned error can't find the container with id d349c68613f2eff3bd402b7b94fb92dbde274bbafe6d9cf971a4493eee5c1723
	Oct 24 19:42:08 functional-419430 kubelet[4477]: E1024 19:42:08.983462    4477 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 24 19:42:08 functional-419430 kubelet[4477]: E1024 19:42:08.983515    4477 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 24 19:42:08 functional-419430 kubelet[4477]: E1024 19:42:08.983699    4477 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjvzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(2b22e31f-7c18-4473-8b02-e1f230eb99dc): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:42:08 functional-419430 kubelet[4477]: E1024 19:42:08.983744    4477 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="2b22e31f-7c18-4473-8b02-e1f230eb99dc"
	Oct 24 19:42:24 functional-419430 kubelet[4477]: E1024 19:42:24.283722    4477 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="2b22e31f-7c18-4473-8b02-e1f230eb99dc"
	Oct 24 19:42:38 functional-419430 kubelet[4477]: E1024 19:42:38.284337    4477 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="2b22e31f-7c18-4473-8b02-e1f230eb99dc"
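	
	The root cause of this test's failure is explicit here: docker.io returns toomanyrequests (Docker Hub's anonymous pull limit), so every pull of docker.io/nginx fails and the kubelet parks the pods in ImagePullBackOff. The retry cadence in the timestamps is the kubelet's exponential back-off, which by default starts at 10s and doubles up to a 5-minute cap; an illustrative sketch of that loop (not kubelet source):
	
	    package main
	
	    import (
	    	"errors"
	    	"fmt"
	    	"time"
	    )
	
	    // pull stands in for the CRI PullImage call; in this report it always
	    // fails with Docker Hub's rate-limit error.
	    func pull(image string) error {
	    	return errors.New("toomanyrequests: pull rate limit reached")
	    }
	
	    func main() {
	    	const maxBackoff = 5 * time.Minute // kubelet's default cap (300s)
	    	backoff := 10 * time.Second        // kubelet's default initial delay
	
	    	for attempt := 1; attempt <= 5; attempt++ {
	    		err := pull("docker.io/nginx:alpine")
	    		if err == nil {
	    			fmt.Println("pulled")
	    			return
	    		}
	    		fmt.Printf("attempt %d: %v; backing off %s\n", attempt, err, backoff)
	    		time.Sleep(backoff)
	    		backoff *= 2
	    		if backoff > maxBackoff {
	    			backoff = maxBackoff
	    		}
	    	}
	    	fmt.Println("still failing -> pod stays in ImagePullBackOff")
	    }
	
	In CI the usual mitigations are authenticating to the registry or pre-loading the image into the node (for example with minikube image load nginx:alpine), so the IfNotPresent pull policy never has to reach Docker Hub.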
	
	* 
	* ==> storage-provisioner [868c405ef208de71bbc2a1456d4d1778b9c02a86bf7a211652f4d92acc869571] <==
	* I1024 19:38:26.573324       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:38:26.670879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:38:26.673576       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:38:44.095175       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:38:44.095352       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-419430_9b94da32-5a71-48be-a97a-55470ed2fe83!
	I1024 19:38:44.096011       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c11bad98-7d20-4adf-85ec-2720a06dcf5a", APIVersion:"v1", ResourceVersion:"510", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-419430_9b94da32-5a71-48be-a97a-55470ed2fe83 became leader
	I1024 19:38:44.195965       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-419430_9b94da32-5a71-48be-a97a-55470ed2fe83!
	
	* 
	* ==> storage-provisioner [9be1f300b3677e24055eb457d2dd3be2a3605a894c5ba3e931894f31dedc9f9a] <==
	* I1024 19:39:09.777614       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:39:09.808878       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:39:09.809200       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:39:27.221930       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:39:27.222112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-419430_b379ab59-0110-49e1-a8d0-5870d30aec3c!
	I1024 19:39:27.222394       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c11bad98-7d20-4adf-85ec-2720a06dcf5a", APIVersion:"v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-419430_b379ab59-0110-49e1-a8d0-5870d30aec3c became leader
	I1024 19:39:27.322632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-419430_b379ab59-0110-49e1-a8d0-5870d30aec3c!
	I1024 19:39:57.331469       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1024 19:39:57.331599       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    8694c3ee-0c4d-4815-92fb-f69aa8d85fd7 368 0 2023-10-24 19:37:29 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-10-24 19:37:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-b4b98100-8017-4642-bc33-152f64d4dc1b &PersistentVolumeClaim{ObjectMeta:{myclaim  default  b4b98100-8017-4642-bc33-152f64d4dc1b 670 0 2023-10-24 19:39:57 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-10-24 19:39:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-10-24 19:39:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1024 19:39:57.332527       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b4b98100-8017-4642-bc33-152f64d4dc1b", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1024 19:39:57.332673       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-b4b98100-8017-4642-bc33-152f64d4dc1b" provisioned
	I1024 19:39:57.332727       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1024 19:39:57.332763       1 volume_store.go:212] Trying to save persistentvolume "pvc-b4b98100-8017-4642-bc33-152f64d4dc1b"
	I1024 19:39:57.351625       1 volume_store.go:219] persistentvolume "pvc-b4b98100-8017-4642-bc33-152f64d4dc1b" saved
	I1024 19:39:57.351802       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b4b98100-8017-4642-bc33-152f64d4dc1b", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b4b98100-8017-4642-bc33-152f64d4dc1b
	

-- /stdout --
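The storage-provisioner log above records a complete, successful provision cycle for default/myclaim (provision started, volume provisioned, PersistentVolume saved, ProvisioningSucceeded event), so the eventual test failure does not lie in the PVC machinery. For reference, the claim captured in the provisioner's object dump corresponds to a manifest like the sketch below, reconstructed from the logged last-applied-configuration; the test's actual testdata file is not reproduced in this report.

	# Sketch only: PVC reconstructed from the provisioner log above.
	kubectl --context functional-419430 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem
	EOF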
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-419430 -n functional-419430
helpers_test.go:261: (dbg) Run:  kubectl --context functional-419430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-419430 describe pod nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-419430 describe pod nginx-svc sp-pod:

-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419430/192.168.49.2
	Start Time:       Tue, 24 Oct 2023 19:39:34 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zjvzd (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-zjvzd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m26s                default-scheduler  Successfully assigned default/nginx-svc to functional-419430
	  Warning  Failed     2m56s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     113s                 kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     53s (x3 over 2m56s)  kubelet            Error: ErrImagePull
	  Warning  Failed     53s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    23s (x4 over 2m55s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     23s (x4 over 2m55s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    11s (x4 over 3m27s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-419430/192.168.49.2
	Start Time:       Tue, 24 Oct 2023 19:39:57 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gxqkq (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-gxqkq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-419430
	  Warning  Failed     83s (x2 over 2m26s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     83s (x2 over 2m26s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    68s (x2 over 2m25s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     68s (x2 over 2m25s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    55s (x3 over 3m4s)   kubelet            Pulling image "docker.io/nginx"

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.10s)
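Both non-running pods above fail for the same underlying reason: every pull of docker.io/nginx is rejected with Docker Hub's anonymous-pull rate limit (toomanyrequests), so the persistent-volume machinery is never exercised past pod start. One conventional mitigation, not part of this run, is to authenticate image pulls; the sketch below uses a hypothetical secret name (regcred) and placeholder credentials.

	# Hypothetical names: secret "regcred", $DOCKER_USER/$DOCKER_TOKEN placeholders.
	kubectl --context functional-419430 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_TOKEN"
	# Attach the secret to the default service account so plain pod specs use it.
	kubectl --context functional-419430 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'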

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-419430 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [2b22e31f-7c18-4473-8b02-e1f230eb99dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-419430 -n functional-419430
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2023-10-24 19:43:35.03364242 +0000 UTC m=+1206.076679708
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-419430 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-419430 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-419430/192.168.49.2
Start Time:       Tue, 24 Oct 2023 19:39:34 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:  10.244.0.4
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zjvzd (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-zjvzd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-419430
Warning  Failed     3m30s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m27s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:7a448079db9538619f0705c4390364faae3abefeba6f019f0dba0440251ec07f in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     87s (x3 over 3m30s)  kubelet            Error: ErrImagePull
Warning  Failed     87s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    57s (x4 over 3m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     57s (x4 over 3m29s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    45s (x4 over 4m1s)   kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-419430 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-419430 logs nginx-svc -n default: exit status 1 (97.930917ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-419430 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (241.09s)
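As with the PersistentVolumeClaim failure above, the setup times out on the Docker Hub rate limit rather than on the tunnel logic. The 4m0s readiness wait the harness performs can be reproduced by hand; the sketch below reuses the label selector and timeout shown in the test output.

	# Equivalent manual wait for the pod the test polls for.
	kubectl --context functional-419430 wait --for=condition=Ready \
	  pod -l run=nginx-svc --timeout=4m0s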

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-419430 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.98.228.25   10.98.228.25   80:31066/TCP   5m39s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.51s)
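The empty request URL (Get "http:" with no host) indicates the test never resolved a reachable endpoint for the LoadBalancer service, which follows from the backing pod never starting. A manual check against this cluster would keep the tunnel open and curl the EXTERNAL-IP reported by kubectl above; a sketch, assuming the tunnel route can be established (minikube tunnel may prompt for sudo):

	# Shell 1: hold the tunnel open so the LoadBalancer IP is routable.
	out/minikube-linux-arm64 -p functional-419430 tunnel
	# Shell 2: request the EXTERNAL-IP shown in the service listing above.
	curl http://10.98.228.25/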

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.59s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-989906 addons enable ingress --alsologtostderr -v=5
E1024 19:49:34.140728 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:34.146070 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:34.156380 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:34.176681 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:34.216973 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:34.297277 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:34.457668 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:34.778261 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:35.419428 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:36.699649 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:39.259805 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:44.380912 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:49:54.621848 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:50:15.102306 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:50:56.062487 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 19:51:37.740371 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:52:17.983604 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-989906 addons enable ingress --alsologtostderr -v=5: exit status 10 (6m1.060859162s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I1024 19:46:56.415346 1148549 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:46:56.416511 1148549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:46:56.416550 1148549 out.go:309] Setting ErrFile to fd 2...
	I1024 19:46:56.416573 1148549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:46:56.416867 1148549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 19:46:56.417268 1148549 mustload.go:65] Loading cluster: ingress-addon-legacy-989906
	I1024 19:46:56.417817 1148549 config.go:182] Loaded profile config "ingress-addon-legacy-989906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:46:56.417876 1148549 addons.go:594] checking whether the cluster is paused
	I1024 19:46:56.418015 1148549 config.go:182] Loaded profile config "ingress-addon-legacy-989906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:46:56.418044 1148549 host.go:66] Checking if "ingress-addon-legacy-989906" exists ...
	I1024 19:46:56.418567 1148549 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:46:56.440966 1148549 ssh_runner.go:195] Run: systemctl --version
	I1024 19:46:56.441027 1148549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:46:56.463368 1148549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:46:56.559573 1148549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:46:56.559653 1148549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:46:56.625860 1148549 cri.go:89] found id: "df4fd989810972fead0fe8c58d47837f7988fc6412c45fd14a00c36baf2249b3"
	I1024 19:46:56.625878 1148549 cri.go:89] found id: "a9fb2d6cb9cec205962416251374d92cd9e7503a773f0ca4e5c223b9b6b4baae"
	I1024 19:46:56.625885 1148549 cri.go:89] found id: "f9a74f40b715c08a3af5ccec7f1a355bb4413eaf9f8d665231d959337a8c2093"
	I1024 19:46:56.625889 1148549 cri.go:89] found id: "efa6e1f60f591b7123f48b59f9a6a8ab192fa3e090606094a68d65f5f7fab865"
	I1024 19:46:56.625893 1148549 cri.go:89] found id: "c8cf3612021c7fef779b711b930241776408303c48fb9e0d242b5b964a19c69c"
	I1024 19:46:56.625898 1148549 cri.go:89] found id: "d5cc6c70a928b9ec522e10e51ad4dda1729336c1e6a9cee7a3bfa93eb55906d9"
	I1024 19:46:56.625902 1148549 cri.go:89] found id: "a247662fef54b5d20bc798cd13a283fbf75f727c692686b1a65ad9a06104b756"
	I1024 19:46:56.625906 1148549 cri.go:89] found id: "e97cc2b2bd3b113a0cfd0a070341603a6347a43b870add9a7da2c111fda4270c"
	I1024 19:46:56.625910 1148549 cri.go:89] found id: ""
	I1024 19:46:56.625958 1148549 ssh_runner.go:195] Run: sudo runc list -f json
	I1024 19:46:56.658770 1148549 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"a247662fef54b5d20bc798cd13a283fbf75f727c692686b1a65ad9a06104b756","pid":1465,"status":"running","bundle":"/run/containers/storage/overlay-containers/a247662fef54b5d20bc798cd13a283fbf75f727c692686b1a65ad9a06104b756/userdata","rootfs":"/var/lib/containers/storage/overlay/0b7d20c22ce0ca0dea68351e073fabfdbdd004837cc4095b3a396757c583c162/merged","created":"2023-10-24T19:46:07.733461286Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd1dd8ff","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd1dd8ff\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"a247662fef54b5d20bc798cd13a283fbf75f727c692686b1a65ad9a06104b756","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:46:07.621977721Z","io.kubernetes.cri-o.Image":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.18.20","io.kubernetes.cri-o.ImageRef":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ingress-addon-legacy-989906\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"78b40af95c64e5112ac985f00b18628c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ingress-addon-legacy-989906_78b40af95c64e5112ac985f00b18628c/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":
\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0b7d20c22ce0ca0dea68351e073fabfdbdd004837cc4095b3a396757c583c162/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-989906_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f179865bc9f7d1038336febadda8ba4fdaec9d09f3d7c14b298e3687c5603a2a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f179865bc9f7d1038336febadda8ba4fdaec9d09f3d7c14b298e3687c5603a2a","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ingress-addon-legacy-989906_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/containers/kube-apiserver/0
a800a2c\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","
io.kubernetes.pod.name":"kube-apiserver-ingress-addon-legacy-989906","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"78b40af95c64e5112ac985f00b18628c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"78b40af95c64e5112ac985f00b18628c","kubernetes.io/config.seen":"2023-10-24T19:46:03.531026867Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a9fb2d6cb9cec205962416251374d92cd9e7503a773f0ca4e5c223b9b6b4baae","pid":2231,"status":"running","bundle":"/run/containers/storage/overlay-containers/a9fb2d6cb9cec205962416251374d92cd9e7503a773f0ca4e5c223b9b6b4baae/userdata","rootfs":"/var/lib/containers/storage/overlay/ce8366f89fae87dcf49076b335302c67518d4b616ad4c21e63d8b0a6f64ad63d/merged","created":"2023-10-24T19:46:43.992070062Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"787a3e38","io.kubernetes.container.name":"co
redns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"787a3e38\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","
io.kubernetes.cri-o.ContainerID":"a9fb2d6cb9cec205962416251374d92cd9e7503a773f0ca4e5c223b9b6b4baae","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:46:43.957696184Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.7","io.kubernetes.cri-o.ImageRef":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bff467f8-s684d\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"15285ee9-beda-4c26-b142-d521a8fd9693\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bff467f8-s684d_15285ee9-beda-4c26-b142-d521a8fd9693/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/ce8366f89fae
87dcf49076b335302c67518d4b616ad4c21e63d8b0a6f64ad63d/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bff467f8-s684d_kube-system_15285ee9-beda-4c26-b142-d521a8fd9693_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0fa717c044f25beac4e6022b3e3a06d98266ac20d8b44c441ba7e506f034ba84/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0fa717c044f25beac4e6022b3e3a06d98266ac20d8b44c441ba7e506f034ba84","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bff467f8-s684d_kube-system_15285ee9-beda-4c26-b142-d521a8fd9693_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/15285ee9-beda-4c26-b142-d521a8fd9693/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/li
b/kubelet/pods/15285ee9-beda-4c26-b142-d521a8fd9693/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/15285ee9-beda-4c26-b142-d521a8fd9693/containers/coredns/1bd441cd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/15285ee9-beda-4c26-b142-d521a8fd9693/volumes/kubernetes.io~secret/coredns-token-s5psz\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bff467f8-s684d","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"15285ee9-beda-4c26-b142-d521a8fd9693","kubernetes.io/config.seen":"2023-10-24T19:46:43.590281666Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c8cf3612021c7fef779b711b930241776408303c48fb9e0d242b5b964a19c69c","p
id":1494,"status":"running","bundle":"/run/containers/storage/overlay-containers/c8cf3612021c7fef779b711b930241776408303c48fb9e0d242b5b964a19c69c/userdata","rootfs":"/var/lib/containers/storage/overlay/0eadb1cdb9a3447064f0916e00afcfeafd71a2caea98a1963bad066d240e956d/merged","created":"2023-10-24T19:46:07.729254347Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef5ef709","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5ef709\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c8cf3612021c7fef779b711b930241776408303c4
8fb9e0d242b5b964a19c69c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:46:07.672252128Z","io.kubernetes.cri-o.Image":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.18.20","io.kubernetes.cri-o.ImageRef":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ingress-addon-legacy-989906\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d12e497b0008e22acbcd5a9cf2dd48ac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ingress-addon-legacy-989906_d12e497b0008e22acbcd5a9cf2dd48ac/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0eadb1cdb9a3447064f0916e00afcfeafd71a2caea98a1963bad066d240e956d/mer
ged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-989906_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5d58da357b1e169e5926ebd29ed8c5fcd5c3d1d275b7c9ee28f34bff050e3b26/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5d58da357b1e169e5926ebd29ed8c5fcd5c3d1d275b7c9ee28f34bff050e3b26","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ingress-addon-legacy-989906_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/cont
ainers/kube-scheduler/05bfab6e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ingress-addon-legacy-989906","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.hash":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.seen":"2023-10-24T19:46:03.534757869Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d5cc6c70a928b9ec522e10e51ad4dda1729336c1e6a9cee7a3bfa93eb55906d9","pid":1501,"status":"running","bundle":"/run/containers/storage/overlay-containers/d5cc6c70a928b9ec522e10e51ad4dda1729336c1e6a9cee7a3bfa93eb55906d9/userdata","rootfs":"/var/lib/containers/storage/overlay/e0bd20c0c2ba6bd30c8204ab75a6fe0cf6192bd0d52eee81f628fd1bc
dd6f8d9/merged","created":"2023-10-24T19:46:07.746422784Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ce880c0b","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ce880c0b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d5cc6c70a928b9ec522e10e51ad4dda1729336c1e6a9cee7a3bfa93eb55906d9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:46:07.671579803Z","io.kubernetes.cri-o.Image":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-
o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.18.20","io.kubernetes.cri-o.ImageRef":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ingress-addon-legacy-989906\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"49b043cd68fd30a453bdf128db5271f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ingress-addon-legacy-989906_49b043cd68fd30a453bdf128db5271f3/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e0bd20c0c2ba6bd30c8204ab75a6fe0cf6192bd0d52eee81f628fd1bcdd6f8d9/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-989906_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.ResolvPath":"/
run/containers/storage/overlay-containers/b0f34ba30add0b16f5484e39a2a5a37fd9f06790b022d1ab674fa35645703441/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b0f34ba30add0b16f5484e39a2a5a37fd9f06790b022d1ab674fa35645703441","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ingress-addon-legacy-989906_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/containers/kube-controller-manager/bc28401e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128d
b5271f3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volum
e/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ingress-addon-legacy-989906","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.hash":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.seen":"2023-10-24T19:46:03.533083308Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"df4fd989810972fead0fe8c58d47837f7988fc6412c45fd14a00c36baf2249b3","pid":2281,"status":"running","bundle":"/run/containers/storage/overlay-containers/df4fd989810972fead0fe8c58d47837f7988fc6412c45fd14a00c36baf2249b3/userdata","rootfs":"/var/lib/containers/storage/overlay/f0d313a7f2c7bbab4486010109938b4036e981c08cb17ddbc8a1912179dfbee2/merged","created":"2023-10-24T19:46:48.020264829Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"31cbd7f0","io.kubernetes.container.name":"sto
rage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"31cbd7f0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"df4fd989810972fead0fe8c58d47837f7988fc6412c45fd14a00c36baf2249b3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:46:47.968300625Z","io.kubernetes.cri-o.Image":"gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d283
19c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f0d313a7f2c7bbab4486010109938b4036e981c08cb17ddbc8a1912179dfbee2/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7c2a71b976099f535e36a5891b4c80054e5b714e3107d21d807f4335c86b158f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7c2a71b976099f535e36a5891b4c80054e5b714e3107d21d807f4335c86b158f","io.kubernet
es.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4/containers/storage-provisioner/2eaede30\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4/volumes/kubernetes.io~secret/storage-provisioner-token-mscsg\",\"r
eadonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-10-24T19:46
:45.589576013Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e97cc2b2bd3b113a0cfd0a070341603a6347a43b870add9a7da2c111fda4270c","pid":1459,"status":"running","bundle":"/run/containers/storage/overlay-containers/e97cc2b2bd3b113a0cfd0a070341603a6347a43b870add9a7da2c111fda4270c/userdata","rootfs":"/var/lib/containers/storage/overlay/369f3a53ccb77c916d4d5e3270ceeb867e152b9ab3f6de847bc58a51eab42aaa/merged","created":"2023-10-24T19:46:07.754466496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"978adfe8","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"978adfe8\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termina
tionMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e97cc2b2bd3b113a0cfd0a070341603a6347a43b870add9a7da2c111fda4270c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:46:07.602618131Z","io.kubernetes.cri-o.Image":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ingress-addon-legacy-989906\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"daa964eaf73353821fadbd6a7a9e0eb3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ingress-addon-legacy-989906_daa964eaf73353821fadbd6a7a9e0eb3/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/l
ib/containers/storage/overlay/369f3a53ccb77c916d4d5e3270ceeb867e152b9ab3f6de847bc58a51eab42aaa/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ingress-addon-legacy-989906_kube-system_daa964eaf73353821fadbd6a7a9e0eb3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4afc01bd403406d436e05908166c65f395b50a83aedd99df300e5774c5ebdb07/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4afc01bd403406d436e05908166c65f395b50a83aedd99df300e5774c5ebdb07","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ingress-addon-legacy-989906_kube-system_daa964eaf73353821fadbd6a7a9e0eb3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/daa964eaf73353821fadbd6a7a9e0eb3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_
path\":\"/var/lib/kubelet/pods/daa964eaf73353821fadbd6a7a9e0eb3/containers/etcd/63166a9f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ingress-addon-legacy-989906","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"daa964eaf73353821fadbd6a7a9e0eb3","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"daa964eaf73353821fadbd6a7a9e0eb3","kubernetes.io/config.seen":"2023-10-24T19:46:03.536175085Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"efa6e1f60f591b7123f48b59f9a6a8ab192fa3e090606094a68
d65f5f7fab865","pid":2008,"status":"running","bundle":"/run/containers/storage/overlay-containers/efa6e1f60f591b7123f48b59f9a6a8ab192fa3e090606094a68d65f5f7fab865/userdata","rootfs":"/var/lib/containers/storage/overlay/2410ca1f44d0e3c6814924b81b3e9b2150154bd56a955fd5498bb43890820023/merged","created":"2023-10-24T19:46:33.606941535Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"9cd9e0aa","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"9cd9e0aa\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"efa6e1f60f591b7123f48b59f9a6
a8ab192fa3e090606094a68d65f5f7fab865","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:46:33.564124537Z","io.kubernetes.cri-o.Image":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.18.20","io.kubernetes.cri-o.ImageRef":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-tcvng\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"e1f70384-ced8-4a81-89d8-e4d8dc5519b6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-tcvng_e1f70384-ced8-4a81-89d8-e4d8dc5519b6/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2410ca1f44d0e3c6814924b81b3e9b2150154bd56a955fd5498bb43890820023/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy
_kube-proxy-tcvng_kube-system_e1f70384-ced8-4a81-89d8-e4d8dc5519b6_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6c1d10729efe6a4d79da3841ec6c14b0476b21b991cc6b5130a106e138af5f26/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6c1d10729efe6a4d79da3841ec6c14b0476b21b991cc6b5130a106e138af5f26","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-tcvng_kube-system_e1f70384-ced8-4a81-89d8-e4d8dc5519b6_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/e1f70384-ced8-4a81-89d8-e4d8dc5519b6/etc-hosts\",\"
readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/e1f70384-ced8-4a81-89d8-e4d8dc5519b6/containers/kube-proxy/775e2cc9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/e1f70384-ced8-4a81-89d8-e4d8dc5519b6/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/e1f70384-ced8-4a81-89d8-e4d8dc5519b6/volumes/kubernetes.io~secret/kube-proxy-token-hhwph\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-tcvng","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"e1f70384-ced8-4a81-89d8-e4d8dc5519b6","kubernetes.io/config.seen":"2023-10-24T19:46:33.190072
183Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f9a74f40b715c08a3af5ccec7f1a355bb4413eaf9f8d665231d959337a8c2093","pid":2120,"status":"running","bundle":"/run/containers/storage/overlay-containers/f9a74f40b715c08a3af5ccec7f1a355bb4413eaf9f8d665231d959337a8c2093/userdata","rootfs":"/var/lib/containers/storage/overlay/e762902cb35963591b586bb8992e8c6771f5e0753d35655a3eff12a64eb63564/merged","created":"2023-10-24T19:46:35.542659961Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"13fb913a","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"13fb913a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f9a74f40b715c08a3af5ccec7f1a355bb4413eaf9f8d665231d959337a8c2093","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-10-24T19:46:35.488807577Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-qsxdg\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-qsxdg_1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\
"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e762902cb35963591b586bb8992e8c6771f5e0753d35655a3eff12a64eb63564/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-qsxdg_kube-system_1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/87d94f7ba6bdeaae984ebefb34f91f36074c20ae92bb502605931da3bbfd208c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"87d94f7ba6bdeaae984ebefb34f91f36074c20ae92bb502605931da3bbfd208c","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-qsxdg_kube-system_1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"r
eadonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d/containers/kindnet-cni/eaf09238\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d/volumes/kubernetes.io~secret/kindnet-token-25f52\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-qsxdg","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1a50c
0d6-271a-4e41-b2d1-fd3f68c12d0d","kubernetes.io/config.seen":"2023-10-24T19:46:33.229280049Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1024 19:46:56.659354 1148549 cri.go:126] list returned 8 containers
	I1024 19:46:56.659369 1148549 cri.go:129] container: {ID:a247662fef54b5d20bc798cd13a283fbf75f727c692686b1a65ad9a06104b756 Status:running}
	I1024 19:46:56.659384 1148549 cri.go:135] skipping {a247662fef54b5d20bc798cd13a283fbf75f727c692686b1a65ad9a06104b756 running}: state = "running", want "paused"
	I1024 19:46:56.659398 1148549 cri.go:129] container: {ID:a9fb2d6cb9cec205962416251374d92cd9e7503a773f0ca4e5c223b9b6b4baae Status:running}
	I1024 19:46:56.659405 1148549 cri.go:135] skipping {a9fb2d6cb9cec205962416251374d92cd9e7503a773f0ca4e5c223b9b6b4baae running}: state = "running", want "paused"
	I1024 19:46:56.659416 1148549 cri.go:129] container: {ID:c8cf3612021c7fef779b711b930241776408303c48fb9e0d242b5b964a19c69c Status:running}
	I1024 19:46:56.659423 1148549 cri.go:135] skipping {c8cf3612021c7fef779b711b930241776408303c48fb9e0d242b5b964a19c69c running}: state = "running", want "paused"
	I1024 19:46:56.659434 1148549 cri.go:129] container: {ID:d5cc6c70a928b9ec522e10e51ad4dda1729336c1e6a9cee7a3bfa93eb55906d9 Status:running}
	I1024 19:46:56.659441 1148549 cri.go:135] skipping {d5cc6c70a928b9ec522e10e51ad4dda1729336c1e6a9cee7a3bfa93eb55906d9 running}: state = "running", want "paused"
	I1024 19:46:56.659452 1148549 cri.go:129] container: {ID:df4fd989810972fead0fe8c58d47837f7988fc6412c45fd14a00c36baf2249b3 Status:running}
	I1024 19:46:56.659464 1148549 cri.go:135] skipping {df4fd989810972fead0fe8c58d47837f7988fc6412c45fd14a00c36baf2249b3 running}: state = "running", want "paused"
	I1024 19:46:56.659471 1148549 cri.go:129] container: {ID:e97cc2b2bd3b113a0cfd0a070341603a6347a43b870add9a7da2c111fda4270c Status:running}
	I1024 19:46:56.659480 1148549 cri.go:135] skipping {e97cc2b2bd3b113a0cfd0a070341603a6347a43b870add9a7da2c111fda4270c running}: state = "running", want "paused"
	I1024 19:46:56.659489 1148549 cri.go:129] container: {ID:efa6e1f60f591b7123f48b59f9a6a8ab192fa3e090606094a68d65f5f7fab865 Status:running}
	I1024 19:46:56.659498 1148549 cri.go:135] skipping {efa6e1f60f591b7123f48b59f9a6a8ab192fa3e090606094a68d65f5f7fab865 running}: state = "running", want "paused"
	I1024 19:46:56.659508 1148549 cri.go:129] container: {ID:f9a74f40b715c08a3af5ccec7f1a355bb4413eaf9f8d665231d959337a8c2093 Status:running}
	I1024 19:46:56.659516 1148549 cri.go:135] skipping {f9a74f40b715c08a3af5ccec7f1a355bb4413eaf9f8d665231d959337a8c2093 running}: state = "running", want "paused"
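	The cri.go:126-135 lines above are a list-then-filter pass: decode the runtime's container list and skip every container whose state is "running" because the caller wants "paused" ones. A minimal sketch of that filter in Go, using only the "id" and "status" fields visible in the JSON records above; the types and function layout here are illustrative, not minikube's actual cri package:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
	)

	// container mirrors the "id" and "status" fields of the records above.
	type container struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// Stand-in data; in the log this JSON comes from the container runtime.
		data := []byte(`[{"id":"a247662fef54","status":"running"},
		                 {"id":"f9a74f40b715","status":"running"}]`)

		var cs []container
		if err := json.Unmarshal(data, &cs); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("list returned %d containers\n", len(cs))

		want := "paused"
		var kept []container
		for _, c := range cs {
			if c.Status != want {
				// Same skip line as cri.go:135 above.
				fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
				continue
			}
			kept = append(kept, c)
		}
		fmt.Printf("%d containers matched\n", len(kept))
	}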
	I1024 19:46:56.662542 1148549 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1024 19:46:56.664692 1148549 config.go:182] Loaded profile config "ingress-addon-legacy-989906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:46:56.664711 1148549 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-989906"
	I1024 19:46:56.664719 1148549 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-989906"
	I1024 19:46:56.664751 1148549 host.go:66] Checking if "ingress-addon-legacy-989906" exists ...
	I1024 19:46:56.665160 1148549 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:46:56.686301 1148549 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1024 19:46:56.688438 1148549 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1024 19:46:56.690448 1148549 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1024 19:46:56.692400 1148549 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1024 19:46:56.692420 1148549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1024 19:46:56.692485 1148549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:46:56.710517 1148549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:46:56.818900 1148549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
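	The install step above renders the addon manifest, copies it to the node over the SSH port resolved from docker inspect, and applies it with the node's bundled kubectl. A rough way to reproduce those two Run lines by hand, assuming `minikube cp` and `minikube ssh` as stand-ins for the internal ssh_runner and a writable /tmp target path (the test binary itself does not shell out to the minikube CLI like this):

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes a command and aborts on failure, echoing combined output.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		profile := "ingress-addon-legacy-989906"

		// Copy the rendered manifest onto the node (stand-in for the scp above;
		// /tmp is an assumed writable path, the log uses /etc/kubernetes/addons).
		run("minikube", "-p", profile, "cp",
			"ingress-deploy.yaml", "/tmp/ingress-deploy.yaml")

		// Apply it with the node's bundled kubectl, as the ssh_runner line does.
		run("minikube", "-p", profile, "ssh", "--",
			"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.18.20/kubectl",
			"apply", "-f", "/tmp/ingress-deploy.yaml")
	}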
	I1024 19:46:57.365944 1148549 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-989906"
	I1024 19:46:57.368346 1148549 out.go:177] * Verifying ingress addon...
	I1024 19:46:57.371155 1148549 kapi.go:59] client config for ingress-addon-legacy-989906: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
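	The rest.Config dump above is an ordinary client-go configuration built from the profile's client certificate, key, and CA. A minimal sketch that builds the same clientset from the paths shown in the log; the error handling and the trailing pod list are illustrative additions:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		base := "/home/jenkins/minikube-integration/17485-1112248/.minikube"
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: base + "/profiles/ingress-addon-legacy-989906/client.crt",
				KeyFile:  base + "/profiles/ingress-addon-legacy-989906/client.key",
				CAFile:   base + "/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Same namespace and label selector the verifier below waits on.
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("found %d pods\n", len(pods.Items))
	}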
	I1024 19:46:57.371957 1148549 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 19:46:57.372536 1148549 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1024 19:46:57.400072 1148549 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1024 19:46:57.400146 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:46:57.404137 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:46:57.908751 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:46:58.408244 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:46:58.910353 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:46:59.408854 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:46:59.908760 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:00.408304 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:00.908898 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:01.408030 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:01.908452 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:02.408942 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:02.908372 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:03.408600 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:03.908907 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:04.408155 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:04.908602 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:05.409027 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:05.908253 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:06.409008 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:06.908496 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:07.408924 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:07.908469 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:08.408757 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:08.907965 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:09.408197 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:09.908792 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:10.407984 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:10.908088 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:11.408413 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:11.909069 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:12.408374 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:12.911846 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:13.408121 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:13.908411 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:14.408081 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:14.908510 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:15.409015 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:15.908087 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:16.408869 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:16.908406 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:17.408832 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:17.908031 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:18.408311 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:18.908992 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:19.408299 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:19.908573 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:20.409087 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:20.907908 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:21.407938 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:21.908450 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:22.408954 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:22.908318 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:23.408717 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:23.907964 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:24.408277 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:24.908093 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:25.408556 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:25.908370 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:26.408887 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:26.908788 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:27.408442 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:27.909676 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:28.407998 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:28.907873 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:29.408264 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:29.908690 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:30.407962 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:30.908321 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:31.408942 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:31.908376 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:32.408727 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:32.908181 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:33.408538 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:33.908941 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:34.408218 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:34.908581 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:35.408233 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:35.908692 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:36.408166 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:36.908878 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:37.408549 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:37.909200 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:38.408719 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:38.908289 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:39.408475 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:39.909660 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:40.408320 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:40.908826 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:41.408044 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:41.909071 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:42.408073 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:42.908485 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:43.408896 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:43.908311 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:44.410727 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:44.908056 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:45.408484 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:45.908833 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:46.408978 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:46.907854 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:47.408335 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:47.908610 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:48.408873 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:48.907849 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:49.407952 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:49.908333 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:50.408774 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:50.907917 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:51.408346 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:51.908812 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:52.408184 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:52.907983 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:53.408359 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:53.908701 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:54.408265 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:54.908078 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:55.408060 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:55.907980 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:56.408177 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:56.908715 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:57.407902 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:57.908083 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:58.408434 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:58.908723 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:59.408009 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:47:59.908838 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:00.408304 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:00.908952 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:01.408627 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:01.908350 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:02.408715 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:02.908160 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:03.408417 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:03.908737 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:04.408304 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:04.908755 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:05.408234 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:05.908536 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:06.409400 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:06.908146 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:07.408576 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:07.908872 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:08.407960 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:08.908116 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:09.407940 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:09.907969 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:10.408324 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:10.909103 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:11.408637 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:11.908906 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:12.407963 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:12.907974 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:13.408381 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:13.908671 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:14.408363 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:14.908718 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:15.408193 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:15.908480 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:16.408928 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:16.908105 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:17.408605 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:17.909006 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:18.409425 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:18.909201 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:19.408499 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:19.909210 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:20.408638 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:20.908963 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:21.409175 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:21.908841 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:22.407896 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:22.908001 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:23.407906 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:23.908173 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:24.407854 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:24.908121 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:25.408435 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:25.908771 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:26.408183 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:26.908765 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:27.408970 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:27.908090 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:28.408479 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:28.908805 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:29.407947 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:29.909512 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:30.408892 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:30.907857 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:31.407977 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:31.908557 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:32.409168 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:32.908127 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:33.408399 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:33.908666 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:34.408902 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:34.908004 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:35.407903 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:35.907957 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:36.408334 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:36.908992 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:37.408703 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:37.908133 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:38.408394 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:38.908753 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:39.408095 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:39.908425 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:40.408573 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:40.909159 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:41.407978 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:41.907991 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:42.408305 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:42.908751 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:43.408085 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:43.908272 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:44.408808 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:44.907967 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:45.408380 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:45.908698 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:46.408813 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:46.908274 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:47.408708 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:47.907982 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:48.408254 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:48.908398 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:49.408139 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:49.908189 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:50.408791 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:50.907927 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:51.408064 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:51.908925 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:52.408464 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:52.908923 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:53.408360 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:53.908806 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:54.408013 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:54.908075 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:55.408507 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:55.909259 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:56.409016 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:56.908733 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:57.408485 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:57.908456 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:58.408820 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:58.908648 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:59.408137 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:48:59.907935 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:00.408139 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:00.908368 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:01.408779 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:01.907986 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:02.408193 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:02.909008 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:03.407907 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:03.908271 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:04.408591 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:04.908915 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:05.407799 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:05.908560 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:06.408851 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:06.907905 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:07.408690 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:07.908999 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:08.408128 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:08.908137 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:09.407967 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:09.908182 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:10.407987 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:10.907852 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:11.407996 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:11.907817 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:12.407827 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:12.907832 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:13.408029 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:13.908172 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:14.408231 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:14.908585 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:15.408803 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:15.907828 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:16.408588 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:16.907991 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:17.408407 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:17.908686 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:18.408786 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:18.908016 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:19.408126 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:19.908843 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:20.407856 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:20.907876 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:21.408139 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:21.908839 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:22.407849 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:22.907888 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:23.407930 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:23.907915 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:24.408263 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:24.908686 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:25.407950 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:25.907859 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:26.411694 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:26.907948 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:27.408496 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:27.908670 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:28.407966 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:28.907989 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:29.407976 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:29.908196 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:30.408555 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:30.908904 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:31.408143 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:31.908153 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:32.407924 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:32.908108 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:33.408153 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:33.908180 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:34.408388 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:34.908539 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:35.409106 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:49:35.908185 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical kapi.go:96 poll line repeated at ~500ms intervals from 19:49:36 through 19:52:56; the pod state never left Pending ...]
	I1024 19:52:56.908755 1148549 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1024 19:52:57.373488 1148549 kapi.go:107] duration metric: took 6m0.000939186s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1024 19:52:57.375868 1148549 out.go:177] 
	W1024 19:52:57.377779 1148549 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	W1024 19:52:57.377799 1148549 out.go:239] * 
	W1024 19:52:57.384288 1148549 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 19:52:57.386313 1148549 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
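The kapi.go:96 lines above are minikube's addon readiness poll: it lists pods matching the label selector on a fixed interval (roughly every 500ms in this log) and keeps retrying until the 6-minute context deadline expires, at which point the addon enable fails with MK_ADDON_ENABLE and the command exits with status 10. A minimal client-go sketch of that poll-until-deadline pattern follows; it is illustrative only, not minikube's actual implementation, and waitForPodsReady, the 500ms interval, and the kubeconfig handling are assumptions for the example.

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPodsReady polls pods matching selector in ns until all of them are
	// Running, or until ctx's deadline expires (mirroring the 6m timeout above).
	func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // poll interval seen in the log
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				// This is the "context deadline exceeded" path in the failure above.
				return fmt.Errorf("waiting for %s pods: %w", selector, ctx.Err())
			case <-ticker.C:
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					continue // transient API errors and empty lists are retried until the deadline
				}
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false // e.g. still Pending, as in this run
						break
					}
				}
				if ready {
					return nil
				}
			}
		}
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPodsReady(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
			fmt.Println("X", err)
		}
	}

In this run the selector never matched a Running pod, so such a loop simply runs to its deadline, which is consistent with the exit status 10 reported by the test.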
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-989906
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-989906:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26",
	        "Created": "2023-10-24T19:45:41.323562437Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1146012,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:45:41.646832689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/hosts",
	        "LogPath": "/var/lib/docker/containers/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26-json.log",
	        "Name": "/ingress-addon-legacy-989906",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-989906:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-989906",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b0c0af87b8d6d0b838c30216a29c247d26811c552f6cb3d071873832d83f398-init/diff:/var/lib/docker/overlay2/ab7e622cf253e7484ae8d7af3c5bb3ba83f211c878ee7a8c069db30bbba78b6c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b0c0af87b8d6d0b838c30216a29c247d26811c552f6cb3d071873832d83f398/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b0c0af87b8d6d0b838c30216a29c247d26811c552f6cb3d071873832d83f398/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b0c0af87b8d6d0b838c30216a29c247d26811c552f6cb3d071873832d83f398/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-989906",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-989906/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-989906",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-989906",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-989906",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec763f8756e976b12ab743b6414c184b0e4c92d9c94acfe37ba7372650a18484",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34223"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34222"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ec763f8756e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-989906": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7b10e689dc76",
	                        "ingress-addon-legacy-989906"
	                    ],
	                    "NetworkID": "ab20f783397596b6b2b42c66fe9839120e2f1a6a22433e710e060b2d2df080fb",
	                    "EndpointID": "41513c7ac0b0fd3a0f18525d51134f59c305d56a782425a52e2f615aea7627cb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
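Note: the empty "HostPort" fields under "PortBindings" in the inspect output above mean the container was started with ephemeral host ports (--publish=127.0.0.1::<port>); the ports Docker actually assigned appear under "NetworkSettings.Ports". As a minimal sketch (not part of the captured output, and assuming the ingress-addon-legacy-989906 container is still running), a single mapping can be read back with the same Go template the harness uses later in these logs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-989906
	# for this run this would print 34225, the SSH host port listed above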
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-989906 -n ingress-addon-legacy-989906
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddonActivation FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-989906 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-989906 logs -n 25: (1.425295405s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh findmnt                                          | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh findmnt                                          | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh findmnt                                          | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh findmnt                                          | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh sudo cat                                         | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | /etc/test/nested/copy/1117634/hosts                                    |                             |         |         |                     |                     |
	| start          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | --dry-run --memory                                                     |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                |                             |         |         |                     |                     |
	|                | --driver=docker                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| start          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | --dry-run --memory                                                     |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                |                             |         |         |                     |                     |
	|                | --driver=docker                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| start          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | --dry-run --alsologtostderr                                            |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                     | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | -p functional-419430                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| image          | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh pgrep                                            | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-419430 image build -t                                       | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | localhost/my-image:functional-419430                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-419430 image ls                                             | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	| image          | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:45 UTC | 24 Oct 23 19:45 UTC |
	| start          | -p ingress-addon-legacy-989906                                         | ingress-addon-legacy-989906 | jenkins | v1.31.2 | 24 Oct 23 19:45 UTC | 24 Oct 23 19:46 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-989906                                            | ingress-addon-legacy-989906 | jenkins | v1.31.2 | 24 Oct 23 19:46 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:45:16
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:45:16.720657 1145561 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:45:16.720861 1145561 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:45:16.720876 1145561 out.go:309] Setting ErrFile to fd 2...
	I1024 19:45:16.720883 1145561 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:45:16.721227 1145561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 19:45:16.721862 1145561 out.go:303] Setting JSON to false
	I1024 19:45:16.722777 1145561 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34066,"bootTime":1698142651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:45:16.722857 1145561 start.go:138] virtualization:  
	I1024 19:45:16.725929 1145561 out.go:177] * [ingress-addon-legacy-989906] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 19:45:16.728980 1145561 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:45:16.729138 1145561 notify.go:220] Checking for updates...
	I1024 19:45:16.733355 1145561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:45:16.735713 1145561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:45:16.737820 1145561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:45:16.740181 1145561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 19:45:16.742425 1145561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:45:16.744791 1145561 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:45:16.769699 1145561 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:45:16.769865 1145561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:45:16.855969 1145561 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-24 19:45:16.845921506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:45:16.856079 1145561 docker.go:295] overlay module found
	I1024 19:45:16.860454 1145561 out.go:177] * Using the docker driver based on user configuration
	I1024 19:45:16.862609 1145561 start.go:298] selected driver: docker
	I1024 19:45:16.862626 1145561 start.go:902] validating driver "docker" against <nil>
	I1024 19:45:16.862693 1145561 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:45:16.863316 1145561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:45:16.934791 1145561 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-24 19:45:16.924184492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:45:16.934946 1145561 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:45:16.935178 1145561 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:45:16.937321 1145561 out.go:177] * Using Docker driver with root privileges
	I1024 19:45:16.939553 1145561 cni.go:84] Creating CNI manager for ""
	I1024 19:45:16.939571 1145561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:45:16.939587 1145561 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:45:16.939597 1145561 start_flags.go:323] config:
	{Name:ingress-addon-legacy-989906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-989906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:45:16.942767 1145561 out.go:177] * Starting control plane node ingress-addon-legacy-989906 in cluster ingress-addon-legacy-989906
	I1024 19:45:16.945086 1145561 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:45:16.947576 1145561 out.go:177] * Pulling base image ...
	I1024 19:45:16.950006 1145561 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:45:16.950106 1145561 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:45:16.967575 1145561 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:45:16.967600 1145561 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:45:17.033828 1145561 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1024 19:45:17.033865 1145561 cache.go:57] Caching tarball of preloaded images
	I1024 19:45:17.034044 1145561 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:45:17.037482 1145561 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1024 19:45:17.041220 1145561 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1024 19:45:17.158833 1145561 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1024 19:45:33.344124 1145561 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1024 19:45:33.344224 1145561 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1024 19:45:34.602993 1145561 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1024 19:45:34.603380 1145561 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/config.json ...
	I1024 19:45:34.603416 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/config.json: {Name:mk8d09da22c56b346f06e446c8fe836fdf8fc271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:34.603625 1145561 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:45:34.603685 1145561 start.go:365] acquiring machines lock for ingress-addon-legacy-989906: {Name:mk8d4eab24c712234aec5d6857de53c99eec40c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:45:34.603762 1145561 start.go:369] acquired machines lock for "ingress-addon-legacy-989906" in 61.342µs
	I1024 19:45:34.603788 1145561 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-989906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-989906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:45:34.603888 1145561 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:45:34.606696 1145561 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1024 19:45:34.607039 1145561 start.go:159] libmachine.API.Create for "ingress-addon-legacy-989906" (driver="docker")
	I1024 19:45:34.607070 1145561 client.go:168] LocalClient.Create starting
	I1024 19:45:34.607142 1145561 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem
	I1024 19:45:34.607184 1145561 main.go:141] libmachine: Decoding PEM data...
	I1024 19:45:34.607206 1145561 main.go:141] libmachine: Parsing certificate...
	I1024 19:45:34.607305 1145561 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem
	I1024 19:45:34.607331 1145561 main.go:141] libmachine: Decoding PEM data...
	I1024 19:45:34.607355 1145561 main.go:141] libmachine: Parsing certificate...
	I1024 19:45:34.607758 1145561 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-989906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 19:45:34.626013 1145561 cli_runner.go:211] docker network inspect ingress-addon-legacy-989906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 19:45:34.626111 1145561 network_create.go:281] running [docker network inspect ingress-addon-legacy-989906] to gather additional debugging logs...
	I1024 19:45:34.626134 1145561 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-989906
	W1024 19:45:34.643204 1145561 cli_runner.go:211] docker network inspect ingress-addon-legacy-989906 returned with exit code 1
	I1024 19:45:34.643237 1145561 network_create.go:284] error running [docker network inspect ingress-addon-legacy-989906]: docker network inspect ingress-addon-legacy-989906: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-989906 not found
	I1024 19:45:34.643253 1145561 network_create.go:286] output of [docker network inspect ingress-addon-legacy-989906]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-989906 not found
	
	** /stderr **
	I1024 19:45:34.643364 1145561 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:45:34.665093 1145561 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40006e1c40}
	I1024 19:45:34.665130 1145561 network_create.go:124] attempt to create docker network ingress-addon-legacy-989906 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1024 19:45:34.665194 1145561 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 ingress-addon-legacy-989906
	I1024 19:45:34.737197 1145561 network_create.go:108] docker network ingress-addon-legacy-989906 192.168.49.0/24 created
	I1024 19:45:34.737230 1145561 kic.go:118] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-989906" container
	I1024 19:45:34.737312 1145561 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:45:34.754218 1145561 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-989906 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:45:34.773288 1145561 oci.go:103] Successfully created a docker volume ingress-addon-legacy-989906
	I1024 19:45:34.773370 1145561 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-989906-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --entrypoint /usr/bin/test -v ingress-addon-legacy-989906:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:45:36.316739 1145561 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-989906-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --entrypoint /usr/bin/test -v ingress-addon-legacy-989906:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (1.543298122s)
	I1024 19:45:36.316767 1145561 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-989906
	I1024 19:45:36.316793 1145561 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:45:36.316813 1145561 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 19:45:36.316909 1145561 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-989906:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 19:45:41.236328 1145561 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-989906:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.919358735s)
	I1024 19:45:41.236361 1145561 kic.go:200] duration metric: took 4.919545 seconds to extract preloaded images to volume
	W1024 19:45:41.236503 1145561 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 19:45:41.236625 1145561 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 19:45:41.307096 1145561 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-989906 --name ingress-addon-legacy-989906 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --network ingress-addon-legacy-989906 --ip 192.168.49.2 --volume ingress-addon-legacy-989906:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:45:41.655653 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Running}}
	I1024 19:45:41.683958 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:45:41.708311 1145561 cli_runner.go:164] Run: docker exec ingress-addon-legacy-989906 stat /var/lib/dpkg/alternatives/iptables
	I1024 19:45:41.798641 1145561 oci.go:144] the created container "ingress-addon-legacy-989906" has a running status.
	I1024 19:45:41.798671 1145561 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa...
	I1024 19:45:42.073358 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1024 19:45:42.073461 1145561 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 19:45:42.110141 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:45:42.148426 1145561 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 19:45:42.148445 1145561 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-989906 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 19:45:42.253345 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:45:42.280808 1145561 machine.go:88] provisioning docker machine ...
	I1024 19:45:42.280840 1145561 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-989906"
	I1024 19:45:42.280909 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:42.306551 1145561 main.go:141] libmachine: Using SSH client type: native
	I1024 19:45:42.307027 1145561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1024 19:45:42.307042 1145561 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-989906 && echo "ingress-addon-legacy-989906" | sudo tee /etc/hostname
	I1024 19:45:42.307672 1145561 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1024 19:45:45.460134 1145561 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-989906
	
	I1024 19:45:45.460219 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:45.479965 1145561 main.go:141] libmachine: Using SSH client type: native
	I1024 19:45:45.480374 1145561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1024 19:45:45.480398 1145561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-989906' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-989906/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-989906' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:45:45.618651 1145561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:45:45.618682 1145561 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 19:45:45.618717 1145561 ubuntu.go:177] setting up certificates
	I1024 19:45:45.618726 1145561 provision.go:83] configureAuth start
	I1024 19:45:45.618787 1145561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-989906
	I1024 19:45:45.636613 1145561 provision.go:138] copyHostCerts
	I1024 19:45:45.636660 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 19:45:45.636689 1145561 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 19:45:45.636700 1145561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 19:45:45.636778 1145561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 19:45:45.636861 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 19:45:45.636885 1145561 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 19:45:45.636893 1145561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 19:45:45.636919 1145561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 19:45:45.636960 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 19:45:45.636981 1145561 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 19:45:45.636988 1145561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 19:45:45.637012 1145561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 19:45:45.637059 1145561 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-989906 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-989906]
	I1024 19:45:45.983782 1145561 provision.go:172] copyRemoteCerts
	I1024 19:45:45.983851 1145561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:45:45.983894 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.008964 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.108955 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:45:46.109013 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:45:46.137220 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:45:46.137288 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1024 19:45:46.165791 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:45:46.165857 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:45:46.193331 1145561 provision.go:86] duration metric: configureAuth took 574.587724ms
	I1024 19:45:46.193357 1145561 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:45:46.193548 1145561 config.go:182] Loaded profile config "ingress-addon-legacy-989906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:45:46.193664 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.212236 1145561 main.go:141] libmachine: Using SSH client type: native
	I1024 19:45:46.212677 1145561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1024 19:45:46.212700 1145561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:45:46.496361 1145561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 19:45:46.496383 1145561 machine.go:91] provisioned docker machine in 4.21555483s
	I1024 19:45:46.496393 1145561 client.go:171] LocalClient.Create took 11.889317728s
	I1024 19:45:46.496414 1145561 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-989906" took 11.889367549s
	I1024 19:45:46.496425 1145561 start.go:300] post-start starting for "ingress-addon-legacy-989906" (driver="docker")
	I1024 19:45:46.496435 1145561 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:45:46.496507 1145561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:45:46.496565 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.515970 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.616346 1145561 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:45:46.620314 1145561 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:45:46.620391 1145561 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:45:46.620410 1145561 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:45:46.620417 1145561 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:45:46.620430 1145561 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 19:45:46.620503 1145561 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 19:45:46.620594 1145561 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 19:45:46.620605 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> /etc/ssl/certs/11176342.pem
	I1024 19:45:46.620715 1145561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:45:46.630640 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 19:45:46.657719 1145561 start.go:303] post-start completed in 161.27768ms
	I1024 19:45:46.658095 1145561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-989906
	I1024 19:45:46.676059 1145561 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/config.json ...
	I1024 19:45:46.676327 1145561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:45:46.676377 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.694181 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.787492 1145561 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:45:46.793034 1145561 start.go:128] duration metric: createHost completed in 12.189125393s
	I1024 19:45:46.793059 1145561 start.go:83] releasing machines lock for "ingress-addon-legacy-989906", held for 12.18928417s
	I1024 19:45:46.793152 1145561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-989906
	I1024 19:45:46.811259 1145561 ssh_runner.go:195] Run: cat /version.json
	I1024 19:45:46.811316 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.811554 1145561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:45:46.811617 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.832859 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.840696 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.926099 1145561 ssh_runner.go:195] Run: systemctl --version
	I1024 19:45:47.063105 1145561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:45:47.209602 1145561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:45:47.215031 1145561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:45:47.241154 1145561 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:45:47.241257 1145561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:45:47.278613 1145561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
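
The two find/mv passes above park any preexisting loopback and bridge/podman CNI configs under a .mk_disabled suffix, so cri-o comes up with a clean /etc/cni/net.d before kindnet is applied later. A minimal Go sketch of the same rename pass (paths taken from the log; this is illustrative, not minikube's actual implementation, which drives find/mv over ssh):

// cni_disable.go - a sketch of the rename-to-.mk_disabled pass above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	for _, pat := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already parked on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}
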
	I1024 19:45:47.278634 1145561 start.go:472] detecting cgroup driver to use...
	I1024 19:45:47.278693 1145561 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:45:47.278767 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:45:47.297632 1145561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:45:47.311228 1145561 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:45:47.311297 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:45:47.327106 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:45:47.343781 1145561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:45:47.444787 1145561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:45:47.554033 1145561 docker.go:214] disabling docker service ...
	I1024 19:45:47.554139 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:45:47.575780 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:45:47.589988 1145561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:45:47.693945 1145561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:45:47.805962 1145561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:45:47.819422 1145561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:45:47.839206 1145561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 19:45:47.839271 1145561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:45:47.851290 1145561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:45:47.851361 1145561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:45:47.863486 1145561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:45:47.875288 1145561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
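
The four sed edits above (crio.go:59 and crio.go:70) are plain line rewrites of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, then delete and re-insert conmon_cgroup = "pod" after it. A minimal Go equivalent of the first two rewrites, assuming the same config path (a sketch, not minikube's code):

// crio_conf.go - a sketch of the sed rewrites above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	// sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// (the log's last two seds handle conmon_cgroup the same way)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
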
	I1024 19:45:47.886752 1145561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:45:47.897673 1145561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:45:47.908072 1145561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:45:47.918290 1145561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:45:48.005278 1145561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:45:48.134108 1145561 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:45:48.134235 1145561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:45:48.139234 1145561 start.go:540] Will wait 60s for crictl version
	I1024 19:45:48.139332 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:48.143894 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:45:48.188700 1145561 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:45:48.188804 1145561 ssh_runner.go:195] Run: crio --version
	I1024 19:45:48.233531 1145561 ssh_runner.go:195] Run: crio --version
	I1024 19:45:48.277152 1145561 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1024 19:45:48.279249 1145561 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-989906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:45:48.296922 1145561 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1024 19:45:48.301639 1145561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
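
The grep -v / echo / cp pipeline above makes the host.minikube.internal mapping idempotent: any stale line is filtered out before the fresh one is appended, and the result is copied back over /etc/hosts. A sketch of the same filter-then-append in Go:

// hosts_update.go - a sketch of the idempotent /etc/hosts rewrite above.
package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// same filter as: grep -v $'\thost.minikube.internal$'
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
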
	I1024 19:45:48.314706 1145561 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:45:48.314776 1145561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:45:48.372347 1145561 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1024 19:45:48.372418 1145561 ssh_runner.go:195] Run: which lz4
	I1024 19:45:48.377023 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1024 19:45:48.377156 1145561 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1024 19:45:48.381726 1145561 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:45:48.381782 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1024 19:45:50.474362 1145561 crio.go:444] Took 2.097254 seconds to copy over tarball
	I1024 19:45:50.474429 1145561 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:45:53.136608 1145561 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.662154995s)
	I1024 19:45:53.136685 1145561 crio.go:451] Took 2.662300 seconds to extract the tarball
	I1024 19:45:53.136709 1145561 ssh_runner.go:146] rm: /preloaded.tar.lz4
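
Since no preloaded images were found in the runtime, the ~490 MB preload tarball is scp'd to the node and unpacked into /var with tar's -I lz4 decompress filter, then deleted to reclaim disk. A sketch of the extract step, shelling out the same way the log's command does (assumes tar and lz4 on PATH and sufficient privileges):

// preload_extract.go - a sketch of the lz4 tarball extraction above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
	// mirror the log's cleanup of the tarball once extracted
	_ = os.Remove("/preloaded.tar.lz4")
}
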
	I1024 19:45:53.315387 1145561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:45:53.365830 1145561 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1024 19:45:53.365859 1145561 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 19:45:53.365909 1145561 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:53.366123 1145561 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:45:53.366196 1145561 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:45:53.366278 1145561 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.366358 1145561 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:45:53.366432 1145561 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1024 19:45:53.366503 1145561 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:45:53.366571 1145561 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1024 19:45:53.367488 1145561 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1024 19:45:53.367948 1145561 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.368145 1145561 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:45:53.368307 1145561 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:45:53.368435 1145561 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1024 19:45:53.368573 1145561 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:53.368812 1145561 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:45:53.369924 1145561 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W1024 19:45:53.676244 1145561 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.676498 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1024 19:45:53.699165 1145561 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.699407 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1024 19:45:53.704193 1145561 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.704430 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1024 19:45:53.737104 1145561 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.737363 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.739127 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1024 19:45:53.748276 1145561 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.748620 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:45:53.757956 1145561 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1024 19:45:53.758094 1145561 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1024 19:45:53.758184 1145561 ssh_runner.go:195] Run: which crictl
	W1024 19:45:53.766827 1145561 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.767121 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1024 19:45:53.884078 1145561 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1024 19:45:53.884321 1145561 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:45:53.884395 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.884185 1145561 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1024 19:45:53.884505 1145561 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:45:53.884541 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.884241 1145561 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1024 19:45:53.884618 1145561 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.884684 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.884292 1145561 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1024 19:45:53.884763 1145561 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1024 19:45:53.884812 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.894435 1145561 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1024 19:45:53.894525 1145561 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:45:53.894602 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.894721 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1024 19:45:53.916428 1145561 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1024 19:45:53.916519 1145561 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:45:53.916607 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.918123 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:45:53.918254 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1024 19:45:53.918343 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:45:53.918652 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.980993 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1024 19:45:53.981070 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:45:53.981134 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	W1024 19:45:54.090868 1145561 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:54.091117 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:54.100474 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1024 19:45:54.100625 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1024 19:45:54.100701 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1024 19:45:54.100772 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1024 19:45:54.100872 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1024 19:45:54.100950 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1024 19:45:54.243851 1145561 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1024 19:45:54.243907 1145561 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:54.243965 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:54.248278 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:54.312478 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1024 19:45:54.312574 1145561 cache_images.go:92] LoadImages completed in 946.699962ms
	W1024 19:45:54.312655 1145561 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I1024 19:45:54.312743 1145561 ssh_runner.go:195] Run: crio config
	I1024 19:45:54.372541 1145561 cni.go:84] Creating CNI manager for ""
	I1024 19:45:54.372564 1145561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:45:54.372595 1145561 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:45:54.372619 1145561 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-989906 NodeName:ingress-addon-legacy-989906 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 19:45:54.372763 1145561 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-989906"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
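
The three stanzas above (InitConfiguration, ClusterConfiguration, then the kubelet and kube-proxy configs) are rendered from the kubeadm options struct logged at kubeadm.go:176. A toy rendering of the first stanza with text/template; the struct and field names here are illustrative only, not minikube's real template:

// kubeadm_tmpl.go - a toy sketch of rendering kubeadm config from options.
package main

import (
	"os"
	"text/template"
)

type opts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{"192.168.49.2", 8443, "ingress-addon-legacy-989906", "10.244.0.0/16"})
}
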
	
	I1024 19:45:54.372844 1145561 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-989906 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-989906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:45:54.372922 1145561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1024 19:45:54.383408 1145561 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:45:54.383480 1145561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:45:54.393944 1145561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1024 19:45:54.415237 1145561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1024 19:45:54.437261 1145561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1024 19:45:54.458547 1145561 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:45:54.462717 1145561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:45:54.476274 1145561 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906 for IP: 192.168.49.2
	I1024 19:45:54.476309 1145561 certs.go:190] acquiring lock for shared ca certs: {Name:mka7b9c27527bac3ad97e94531dcdc2bc2059d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:54.476469 1145561 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key
	I1024 19:45:54.476515 1145561 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key
	I1024 19:45:54.476565 1145561 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key
	I1024 19:45:54.476580 1145561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt with IP's: []
	I1024 19:45:54.851770 1145561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt ...
	I1024 19:45:54.851803 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: {Name:mke51ccdea6d497cc04aa9302cd9e407423a2605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:54.852030 1145561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key ...
	I1024 19:45:54.852045 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key: {Name:mk73d3af778ef013d579b8c7642b297d2f6d3187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:54.852144 1145561 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key.dd3b5fb2
	I1024 19:45:54.852162 1145561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:45:55.428504 1145561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt.dd3b5fb2 ...
	I1024 19:45:55.428537 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt.dd3b5fb2: {Name:mkb11342dc132c9193a45c52d9d5f1361f2e75a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:55.428727 1145561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key.dd3b5fb2 ...
	I1024 19:45:55.428740 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key.dd3b5fb2: {Name:mk499274b62d050f26f22ab6c93f6503aa0b4c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:55.428826 1145561 certs.go:337] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt
	I1024 19:45:55.428907 1145561 certs.go:341] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key
	I1024 19:45:55.428964 1145561 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key
	I1024 19:45:55.428980 1145561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt with IP's: []
	I1024 19:45:55.611544 1145561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt ...
	I1024 19:45:55.611576 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt: {Name:mk0f4b8a40022dbc1abe2d378501c36f4803e0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:55.611765 1145561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key ...
	I1024 19:45:55.611778 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key: {Name:mk79e709796ffc3e86f935d3971a5ee1618306ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
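
Each of the certs.go:319 steps above mints a certificate signed by one of minikube's local CAs, with the SANs shown in the log (for the apiserver cert: 192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). A self-contained sketch of that pattern with crypto/x509, using a throwaway CA in place of minikube's persisted ca.key/ca.crt:

// signed_cert.go - a sketch of issuing a CA-signed serving cert with IP SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube reuses its persisted CA key pair instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		// IP SANs from the log: node IP, service VIP, loopback, 10.0.0.1
		IPAddresses: []net.IP{
			net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
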
	I1024 19:45:55.611864 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1024 19:45:55.611885 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1024 19:45:55.611898 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1024 19:45:55.611913 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1024 19:45:55.611927 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:45:55.611940 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:45:55.611960 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:45:55.611974 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:45:55.612025 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem (1338 bytes)
	W1024 19:45:55.612061 1145561 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634_empty.pem, impossibly tiny 0 bytes
	I1024 19:45:55.612074 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:45:55.612100 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:45:55.612127 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:45:55.612157 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem (1675 bytes)
	I1024 19:45:55.612209 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 19:45:55.612240 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> /usr/share/ca-certificates/11176342.pem
	I1024 19:45:55.612255 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:45:55.612266 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem -> /usr/share/ca-certificates/1117634.pem
	I1024 19:45:55.612851 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:45:55.641523 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:45:55.670346 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:45:55.698761 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:45:55.726959 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:45:55.754846 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:45:55.782763 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:45:55.810992 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:45:55.839053 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /usr/share/ca-certificates/11176342.pem (1708 bytes)
	I1024 19:45:55.866971 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:45:55.894266 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem --> /usr/share/ca-certificates/1117634.pem (1338 bytes)
	I1024 19:45:55.921997 1145561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:45:55.942775 1145561 ssh_runner.go:195] Run: openssl version
	I1024 19:45:55.949730 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11176342.pem && ln -fs /usr/share/ca-certificates/11176342.pem /etc/ssl/certs/11176342.pem"
	I1024 19:45:55.961158 1145561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11176342.pem
	I1024 19:45:55.965868 1145561 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:36 /usr/share/ca-certificates/11176342.pem
	I1024 19:45:55.965985 1145561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11176342.pem
	I1024 19:45:55.974471 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11176342.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:45:55.985845 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:45:55.996959 1145561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:45:56.002035 1145561 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:45:56.002115 1145561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:45:56.011228 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:45:56.023152 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117634.pem && ln -fs /usr/share/ca-certificates/1117634.pem /etc/ssl/certs/1117634.pem"
	I1024 19:45:56.034907 1145561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117634.pem
	I1024 19:45:56.039720 1145561 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:36 /usr/share/ca-certificates/1117634.pem
	I1024 19:45:56.039825 1145561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117634.pem
	I1024 19:45:56.048502 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117634.pem /etc/ssl/certs/51391683.0"
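
The repeated ls / openssl x509 -hash / ln -fs sequence above installs each PEM under /etc/ssl/certs/<subject-hash>.0, the lookup scheme OpenSSL-based clients use to find trust anchors (b5213941 in the symlink name is minikubeCA's subject hash). A sketch of one iteration, shelling out to openssl as the log does:

// cert_hash_link.go - a sketch of the subject-hash symlink step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // emulate ln -fs (force)
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(link, "->", pemPath)
}
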
	I1024 19:45:56.060166 1145561 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:45:56.064544 1145561 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:45:56.064606 1145561 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-989906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-989906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:45:56.064681 1145561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:45:56.064740 1145561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:45:56.106791 1145561 cri.go:89] found id: ""
	I1024 19:45:56.106863 1145561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:45:56.118347 1145561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:45:56.129409 1145561 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1024 19:45:56.129474 1145561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:45:56.140213 1145561 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:45:56.140256 1145561 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1024 19:45:56.196998 1145561 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1024 19:45:56.197380 1145561 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:45:56.246414 1145561 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:45:56.246506 1145561 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-aws
	I1024 19:45:56.246554 1145561 kubeadm.go:322] OS: Linux
	I1024 19:45:56.246599 1145561 kubeadm.go:322] CGROUPS_CPU: enabled
	I1024 19:45:56.246650 1145561 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1024 19:45:56.246708 1145561 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1024 19:45:56.246758 1145561 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1024 19:45:56.246806 1145561 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1024 19:45:56.246855 1145561 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1024 19:45:56.340653 1145561 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:45:56.340837 1145561 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:45:56.340980 1145561 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:45:56.581821 1145561 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:45:56.583447 1145561 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:45:56.583522 1145561 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:45:56.686213 1145561 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:45:56.691516 1145561 out.go:204]   - Generating certificates and keys ...
	I1024 19:45:56.691608 1145561 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:45:56.691677 1145561 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:45:57.073771 1145561 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:45:57.325473 1145561 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:45:57.895675 1145561 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:45:58.203876 1145561 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:45:58.836869 1145561 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:45:58.837278 1145561 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-989906 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:45:59.839841 1145561 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:45:59.840037 1145561 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-989906 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:46:00.500172 1145561 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:46:00.632610 1145561 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:46:01.189864 1145561 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:46:01.190405 1145561 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:46:01.513004 1145561 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:46:03.139968 1145561 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:46:03.322051 1145561 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:46:03.508173 1145561 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:46:03.508810 1145561 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:46:03.511088 1145561 out.go:204]   - Booting up control plane ...
	I1024 19:46:03.511210 1145561 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:46:03.519336 1145561 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:46:03.521266 1145561 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:46:03.522559 1145561 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:46:03.525451 1145561 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:46:16.027962 1145561 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502449 seconds
	I1024 19:46:16.028077 1145561 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:46:16.043547 1145561 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:46:16.564122 1145561 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:46:16.564263 1145561 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-989906 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 19:46:17.073109 1145561 kubeadm.go:322] [bootstrap-token] Using token: 2outf5.n916kh4lzbdmpqo8
	I1024 19:46:17.075832 1145561 out.go:204]   - Configuring RBAC rules ...
	I1024 19:46:17.075950 1145561 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:46:17.079118 1145561 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:46:17.087051 1145561 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:46:17.090030 1145561 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:46:17.092840 1145561 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:46:17.095373 1145561 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:46:17.106816 1145561 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:46:17.415673 1145561 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:46:17.495799 1145561 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:46:17.497729 1145561 kubeadm.go:322] 
	I1024 19:46:17.497816 1145561 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:46:17.497830 1145561 kubeadm.go:322] 
	I1024 19:46:17.497917 1145561 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:46:17.497927 1145561 kubeadm.go:322] 
	I1024 19:46:17.497952 1145561 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:46:17.498012 1145561 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:46:17.498065 1145561 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:46:17.498077 1145561 kubeadm.go:322] 
	I1024 19:46:17.498130 1145561 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:46:17.498203 1145561 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:46:17.498270 1145561 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:46:17.498278 1145561 kubeadm.go:322] 
	I1024 19:46:17.498357 1145561 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:46:17.498433 1145561 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:46:17.498443 1145561 kubeadm.go:322] 
	I1024 19:46:17.498525 1145561 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2outf5.n916kh4lzbdmpqo8 \
	I1024 19:46:17.498628 1145561 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 \
	I1024 19:46:17.498653 1145561 kubeadm.go:322]     --control-plane 
	I1024 19:46:17.498661 1145561 kubeadm.go:322] 
	I1024 19:46:17.498741 1145561 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:46:17.498752 1145561 kubeadm.go:322] 
	I1024 19:46:17.498834 1145561 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2outf5.n916kh4lzbdmpqo8 \
	I1024 19:46:17.498942 1145561 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 
	I1024 19:46:17.501874 1145561 kubeadm.go:322] W1024 19:45:56.196096    1223 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1024 19:46:17.502160 1145561 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1024 19:46:17.502379 1145561 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:46:17.502514 1145561 kubeadm.go:322] W1024 19:46:03.519583    1223 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1024 19:46:17.502649 1145561 kubeadm.go:322] W1024 19:46:03.521144    1223 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1024 19:46:17.502661 1145561 cni.go:84] Creating CNI manager for ""
	I1024 19:46:17.502669 1145561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:46:17.505170 1145561 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:46:17.507197 1145561 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:46:17.512795 1145561 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1024 19:46:17.512821 1145561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:46:17.537117 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 19:46:18.006525 1145561 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:46:18.006680 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:18.006779 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=ingress-addon-legacy-989906 minikube.k8s.io/updated_at=2023_10_24T19_46_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:18.027367 1145561 ops.go:34] apiserver oom_adj: -16
	I1024 19:46:18.162188 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:18.253679 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:18.845656 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:19.345358 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:19.845591 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:20.345891 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:20.845163 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:21.345991 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:21.845959 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:22.345252 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:22.845200 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:23.345998 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:23.845423 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:24.345637 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:24.845261 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:25.345762 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:25.845500 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:26.346161 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:26.845517 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:27.345548 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:27.845447 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:28.345956 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:28.845142 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:29.345169 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:29.846117 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:30.346027 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:30.845581 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:31.345307 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:31.845160 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:32.345407 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:32.500882 1145561 kubeadm.go:1081] duration metric: took 14.494248429s to wait for elevateKubeSystemPrivileges.
	I1024 19:46:32.500914 1145561 kubeadm.go:406] StartCluster complete in 36.436320737s
	I1024 19:46:32.500932 1145561 settings.go:142] acquiring lock: {Name:mkaa82b52e1ee562b451304e36332812fcccf981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:46:32.500998 1145561 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:46:32.501765 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/kubeconfig: {Name:mkcb958baf0d06a87d3e11266d914b0c86b46ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:46:32.502502 1145561 kapi.go:59] client config for ingress-addon-legacy-989906: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:46:32.502831 1145561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:46:32.503090 1145561 config.go:182] Loaded profile config "ingress-addon-legacy-989906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:46:32.503256 1145561 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:46:32.503329 1145561 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-989906"
	I1024 19:46:32.503346 1145561 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-989906"
	I1024 19:46:32.503404 1145561 host.go:66] Checking if "ingress-addon-legacy-989906" exists ...
	I1024 19:46:32.503868 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:46:32.504308 1145561 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-989906"
	I1024 19:46:32.504329 1145561 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-989906"
	I1024 19:46:32.504585 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:46:32.505609 1145561 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 19:46:32.546692 1145561 kapi.go:59] client config for ingress-addon-legacy-989906: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:46:32.546976 1145561 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-989906"
	I1024 19:46:32.547005 1145561 host.go:66] Checking if "ingress-addon-legacy-989906" exists ...
	I1024 19:46:32.547456 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:46:32.559549 1145561 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:46:32.567381 1145561 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:46:32.567408 1145561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:46:32.567471 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:46:32.577236 1145561 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-989906" context rescaled to 1 replicas
	I1024 19:46:32.577275 1145561 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:46:32.579112 1145561 out.go:177] * Verifying Kubernetes components...
	I1024 19:46:32.581686 1145561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:46:32.596317 1145561 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:46:32.596337 1145561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:46:32.596398 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:46:32.611116 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:46:32.643420 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:46:32.765053 1145561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
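For readability: the sed pipeline in the line above rewrites the coredns ConfigMap so the Corefile gains a log directive before errors and a hosts block ahead of the forward plugin. Unrolled from the sed expressions themselves, the patched fragment should look roughly like this (plugins between errors and hosts elided):

        log
        errors
        # ... other plugins unchanged ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf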
	I1024 19:46:32.765792 1145561 kapi.go:59] client config for ingress-addon-legacy-989906: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:46:32.766075 1145561 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-989906" to be "Ready" ...
	I1024 19:46:32.770985 1145561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:46:32.857891 1145561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:46:33.255390 1145561 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1024 19:46:33.345619 1145561 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1024 19:46:33.347744 1145561 addons.go:502] enable addons completed in 844.478116ms: enabled=[storage-provisioner default-storageclass]
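The node_ready.go entries that follow come from a readiness poll against the Kubernetes API. A minimal client-go sketch of that pattern, assuming an already-built *kubernetes.Clientset (the helper name and 2-second interval here are illustrative, not minikube's actual code):

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the named node reports Ready=True or the
// timeout expires, mirroring the node_ready.go log entries below.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}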
	I1024 19:46:34.996297 1145561 node_ready.go:58] node "ingress-addon-legacy-989906" has status "Ready":"False"
	I1024 19:46:37.495645 1145561 node_ready.go:58] node "ingress-addon-legacy-989906" has status "Ready":"False"
	I1024 19:46:39.995467 1145561 node_ready.go:58] node "ingress-addon-legacy-989906" has status "Ready":"False"
	I1024 19:46:40.995252 1145561 node_ready.go:49] node "ingress-addon-legacy-989906" has status "Ready":"True"
	I1024 19:46:40.995281 1145561 node_ready.go:38] duration metric: took 8.229186421s waiting for node "ingress-addon-legacy-989906" to be "Ready" ...
	I1024 19:46:40.995292 1145561 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:46:41.002885 1145561 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-s684d" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:43.012046 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-24 19:46:32 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1024 19:46:45.017718 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"False"
	I1024 19:46:47.514912 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"False"
	I1024 19:46:50.014512 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"False"
	I1024 19:46:52.514688 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"False"
	I1024 19:46:55.015069 1145561 pod_ready.go:92] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.015096 1145561 pod_ready.go:81] duration metric: took 14.012103217s waiting for pod "coredns-66bff467f8-s684d" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.015108 1145561 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.020395 1145561 pod_ready.go:92] pod "etcd-ingress-addon-legacy-989906" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.020425 1145561 pod_ready.go:81] duration metric: took 5.308414ms waiting for pod "etcd-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.020441 1145561 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.026289 1145561 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-989906" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.026316 1145561 pod_ready.go:81] duration metric: took 5.866508ms waiting for pod "kube-apiserver-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.026330 1145561 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.031915 1145561 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-989906" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.031942 1145561 pod_ready.go:81] duration metric: took 5.603289ms waiting for pod "kube-controller-manager-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.031956 1145561 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tcvng" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.036764 1145561 pod_ready.go:92] pod "kube-proxy-tcvng" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.036793 1145561 pod_ready.go:81] duration metric: took 4.811046ms waiting for pod "kube-proxy-tcvng" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.036806 1145561 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.210165 1145561 request.go:629] Waited for 173.266799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-989906
	I1024 19:46:55.410361 1145561 request.go:629] Waited for 197.379868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-989906
	I1024 19:46:55.413072 1145561 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-989906" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.413097 1145561 pod_ready.go:81] duration metric: took 376.28353ms waiting for pod "kube-scheduler-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.413110 1145561 pod_ready.go:38] duration metric: took 14.417800427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:46:55.413130 1145561 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:46:55.413187 1145561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:46:55.426479 1145561 api_server.go:72] duration metric: took 22.849159014s to wait for apiserver process to appear ...
	I1024 19:46:55.426504 1145561 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:46:55.426520 1145561 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1024 19:46:55.435427 1145561 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1024 19:46:55.436267 1145561 api_server.go:141] control plane version: v1.18.20
	I1024 19:46:55.436293 1145561 api_server.go:131] duration metric: took 9.781164ms to wait for apiserver health ...
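The healthz probe logged just above is a TLS GET against https://192.168.49.2:8443/healthz using the profile's client certificate, expecting HTTP 200 with body "ok". A standard-library sketch of that check (a simplified stand-in for minikube's api_server.go, with the cert paths passed in by the caller):

package apicheck

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

// healthz performs a client-certificate TLS GET of the apiserver /healthz
// endpoint and succeeds only on a 200 response whose body is "ok".
func healthz(url, certFile, keyFile, caFile string) error {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return err
	}
	caPEM, err := os.ReadFile(caFile)
	if err != nil {
		return err
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}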
	I1024 19:46:55.436302 1145561 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:46:55.609674 1145561 request.go:629] Waited for 173.287402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:46:55.615527 1145561 system_pods.go:59] 8 kube-system pods found
	I1024 19:46:55.615565 1145561 system_pods.go:61] "coredns-66bff467f8-s684d" [15285ee9-beda-4c26-b142-d521a8fd9693] Running
	I1024 19:46:55.615573 1145561 system_pods.go:61] "etcd-ingress-addon-legacy-989906" [338ae618-49b2-4df5-9fab-0fb48ef3a8cb] Running
	I1024 19:46:55.615606 1145561 system_pods.go:61] "kindnet-qsxdg" [1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d] Running
	I1024 19:46:55.615612 1145561 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-989906" [39c44a4a-7457-4c53-b822-9dfe663a2803] Running
	I1024 19:46:55.615617 1145561 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-989906" [f661d095-a92d-4613-afb6-7f92111b468e] Running
	I1024 19:46:55.615626 1145561 system_pods.go:61] "kube-proxy-tcvng" [e1f70384-ced8-4a81-89d8-e4d8dc5519b6] Running
	I1024 19:46:55.615631 1145561 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-989906" [5354673b-0e1e-45a0-addc-d7d966d03605] Running
	I1024 19:46:55.615635 1145561 system_pods.go:61] "storage-provisioner" [484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4] Running
	I1024 19:46:55.615643 1145561 system_pods.go:74] duration metric: took 179.334561ms to wait for pod list to return data ...
	I1024 19:46:55.615654 1145561 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:46:55.810050 1145561 request.go:629] Waited for 194.319829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:46:55.812303 1145561 default_sa.go:45] found service account: "default"
	I1024 19:46:55.812331 1145561 default_sa.go:55] duration metric: took 196.670644ms for default service account to be created ...
	I1024 19:46:55.812343 1145561 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:46:56.009796 1145561 request.go:629] Waited for 197.345669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:46:56.016158 1145561 system_pods.go:86] 8 kube-system pods found
	I1024 19:46:56.016194 1145561 system_pods.go:89] "coredns-66bff467f8-s684d" [15285ee9-beda-4c26-b142-d521a8fd9693] Running
	I1024 19:46:56.016201 1145561 system_pods.go:89] "etcd-ingress-addon-legacy-989906" [338ae618-49b2-4df5-9fab-0fb48ef3a8cb] Running
	I1024 19:46:56.016206 1145561 system_pods.go:89] "kindnet-qsxdg" [1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d] Running
	I1024 19:46:56.016212 1145561 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-989906" [39c44a4a-7457-4c53-b822-9dfe663a2803] Running
	I1024 19:46:56.016238 1145561 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-989906" [f661d095-a92d-4613-afb6-7f92111b468e] Running
	I1024 19:46:56.016253 1145561 system_pods.go:89] "kube-proxy-tcvng" [e1f70384-ced8-4a81-89d8-e4d8dc5519b6] Running
	I1024 19:46:56.016259 1145561 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-989906" [5354673b-0e1e-45a0-addc-d7d966d03605] Running
	I1024 19:46:56.016264 1145561 system_pods.go:89] "storage-provisioner" [484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4] Running
	I1024 19:46:56.016276 1145561 system_pods.go:126] duration metric: took 203.926635ms to wait for k8s-apps to be running ...
	I1024 19:46:56.016286 1145561 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:46:56.016367 1145561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:46:56.030260 1145561 system_svc.go:56] duration metric: took 13.9616ms WaitForService to wait for kubelet.
	I1024 19:46:56.030288 1145561 kubeadm.go:581] duration metric: took 23.452986308s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:46:56.030309 1145561 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:46:56.209624 1145561 request.go:629] Waited for 179.214283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1024 19:46:56.212427 1145561 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 19:46:56.212456 1145561 node_conditions.go:123] node cpu capacity is 2
	I1024 19:46:56.212467 1145561 node_conditions.go:105] duration metric: took 182.13308ms to run NodePressure ...
	I1024 19:46:56.212494 1145561 start.go:228] waiting for startup goroutines ...
	I1024 19:46:56.212505 1145561 start.go:233] waiting for cluster config update ...
	I1024 19:46:56.212515 1145561 start.go:242] writing updated cluster config ...
	I1024 19:46:56.212806 1145561 ssh_runner.go:195] Run: rm -f paused
	I1024 19:46:56.272558 1145561 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1024 19:46:56.275638 1145561 out.go:177] 
	W1024 19:46:56.278098 1145561 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1024 19:46:56.280286 1145561 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1024 19:46:56.282985 1145561 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-989906" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:51:24 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:51:24.329431258Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:51:35 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:51:35.793081575Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a09b1a44-6883-469d-b119-0571567dbef9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:51:35 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:51:35.793373684Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=a09b1a44-6883-469d-b119-0571567dbef9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:51:46 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:51:46.793074644Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=516ff418-8bc8-459d-af19-40c1a8aa1f7a name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:51:46 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:51:46.793367196Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=516ff418-8bc8-459d-af19-40c1a8aa1f7a name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:51:57 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:51:57.793102528Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=9c6a7f04-f604-47f3-98b2-2e654b671064 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:51:57 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:51:57.793380869Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=9c6a7f04-f604-47f3-98b2-2e654b671064 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:04 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:04.793123459Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=facf3d1b-83df-438c-9b1c-4aa675fb5f33 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:04 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:04.793404278Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=facf3d1b-83df-438c-9b1c-4aa675fb5f33 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:11 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:11.793113330Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=ebd0ec63-6529-4cb0-b6d1-223410c5fa85 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:11 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:11.793389965Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=ebd0ec63-6529-4cb0-b6d1-223410c5fa85 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:15 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:15.793025395Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=9715b2b8-b22d-43a1-9d87-14138aa8031e name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:15 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:15.793302111Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=9715b2b8-b22d-43a1-9d87-14138aa8031e name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:22 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:22.793028783Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=6f8bec12-e004-4e34-90e2-329fe3c5ad9c name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:22 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:22.793297869Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=6f8bec12-e004-4e34-90e2-329fe3c5ad9c name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:29 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:29.793160035Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=4008d75e-3bfd-4f89-bfba-0e58be90443b name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:29 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:29.793433199Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=4008d75e-3bfd-4f89-bfba-0e58be90443b name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:37 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:37.793128218Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=410fd3ff-61e8-4b87-a17b-b335f9e22a2c name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:37 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:37.793406182Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=410fd3ff-61e8-4b87-a17b-b335f9e22a2c name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:44 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:44.793172868Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=8d35242c-1b06-4d31-947f-7132281379c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:44 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:44.793441543Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=8d35242c-1b06-4d31-947f-7132281379c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:48 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:48.793447391Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=faf39ece-bacd-4984-b04c-982517055c56 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:48 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:48.793721310Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=faf39ece-bacd-4984-b04c-982517055c56 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:52:48 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:48.794735541Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a58bee23-4826-423b-a218-ec3d3fe15b3d name=/runtime.v1alpha2.ImageService/PullImage
	Oct 24 19:52:48 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:52:48.796959022Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df4fd98981097       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   6 minutes ago       Running             storage-provisioner       0                   7c2a71b976099       storage-provisioner
	a9fb2d6cb9cec       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  6 minutes ago       Running             coredns                   0                   0fa717c044f25       coredns-66bff467f8-s684d
	f9a74f40b715c       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                6 minutes ago       Running             kindnet-cni               0                   87d94f7ba6bde       kindnet-qsxdg
	efa6e1f60f591       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  6 minutes ago       Running             kube-proxy                0                   6c1d10729efe6       kube-proxy-tcvng
	c8cf3612021c7       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  6 minutes ago       Running             kube-scheduler            0                   5d58da357b1e1       kube-scheduler-ingress-addon-legacy-989906
	d5cc6c70a928b       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  6 minutes ago       Running             kube-controller-manager   0                   b0f34ba30add0       kube-controller-manager-ingress-addon-legacy-989906
	a247662fef54b       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  6 minutes ago       Running             kube-apiserver            0                   f179865bc9f7d       kube-apiserver-ingress-addon-legacy-989906
	e97cc2b2bd3b1       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  6 minutes ago       Running             etcd                      0                   4afc01bd40340       etcd-ingress-addon-legacy-989906
	
	* 
	* ==> coredns [a9fb2d6cb9cec205962416251374d92cd9e7503a773f0ca4e5c223b9b6b4baae] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:37469 - 2648 "HINFO IN 7875511511347486053.9022443650232970071. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037457384s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-989906
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-989906
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=ingress-addon-legacy-989906
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_46_18_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:46:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-989906
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:52:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:51:51 +0000   Tue, 24 Oct 2023 19:46:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:51:51 +0000   Tue, 24 Oct 2023 19:46:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:51:51 +0000   Tue, 24 Oct 2023 19:46:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:51:51 +0000   Tue, 24 Oct 2023 19:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-989906
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 41fcfbfa5a5e42e9a423869674535624
	  System UUID:                e7c0f77c-5c5a-4fee-892c-6f6289d58eb2
	  Boot ID:                    f05db690-1143-478b-8d18-db062f271a9b
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-twz9h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-admission-patch-wt5cm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-zvwf7              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m1s
	  kube-system                 coredns-66bff467f8-s684d                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m26s
	  kube-system                 etcd-ingress-addon-legacy-989906                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kindnet-qsxdg                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m25s
	  kube-system                 kube-apiserver-ingress-addon-legacy-989906             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-989906    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 kube-proxy-tcvng                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ingress-addon-legacy-989906             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  6m52s (x5 over 6m52s)  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m52s (x5 over 6m52s)  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m52s (x5 over 6m52s)  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m38s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m38s                  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m38s                  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m38s                  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m25s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                6m18s                  kubelet     Node ingress-addon-legacy-989906 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001163] FS-Cache: O-key=[8] '3a643b0000000000'
	[  +0.000725] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001044] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=0000000080c1e564
	[  +0.001072] FS-Cache: N-key=[8] '3a643b0000000000'
	[  +0.003112] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=000000003058710d
	[  +0.001176] FS-Cache: O-key=[8] '3a643b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=0000000010398763
	[  +0.001113] FS-Cache: N-key=[8] '3a643b0000000000'
	[  +3.176984] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000975] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000953f0312
	[  +0.001131] FS-Cache: O-key=[8] '39643b0000000000'
	[  +0.000732] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000c4f274aa
	[  +0.001081] FS-Cache: N-key=[8] '39643b0000000000'
	[  +0.310132] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000a06fabf2
	[  +0.001138] FS-Cache: O-key=[8] '3f643b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000996] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=000000004c0f819e
	[  +0.001053] FS-Cache: N-key=[8] '3f643b0000000000'
	
	* 
	* ==> etcd [e97cc2b2bd3b113a0cfd0a070341603a6347a43b870add9a7da2c111fda4270c] <==
	* raft2023/10/24 19:46:08 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/24 19:46:08 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/24 19:46:08 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/24 19:46:08 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-24 19:46:09.214165 W | auth: simple token is not cryptographically signed
	2023-10-24 19:46:09.305775 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-24 19:46:09.363811 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-24 19:46:09.394268 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-24 19:46:09.543176 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-24 19:46:09.558003 I | embed: listening for peers on 192.168.49.2:2380
	2023-10-24 19:46:09.582091 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/24 19:46:09 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-24 19:46:09.653879 I | etcdserver: published {Name:ingress-addon-legacy-989906 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-24 19:46:09.666196 I | embed: ready to serve client requests
	2023-10-24 19:46:09.666296 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-24 19:46:09.691960 I | embed: ready to serve client requests
	2023-10-24 19:46:09.693305 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-24 19:46:09.693429 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-24 19:46:09.693489 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-24 19:46:10.262284 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  19:52:58 up  9:35,  0 users,  load average: 0.62, 0.42, 0.77
	Linux ingress-addon-legacy-989906 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [f9a74f40b715c08a3af5ccec7f1a355bb4413eaf9f8d665231d959337a8c2093] <==
	* I1024 19:50:56.410631       1 main.go:227] handling current node
	I1024 19:51:06.419634       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:51:06.419666       1 main.go:227] handling current node
	I1024 19:51:16.423784       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:51:16.423811       1 main.go:227] handling current node
	I1024 19:51:26.429121       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:51:26.429145       1 main.go:227] handling current node
	I1024 19:51:36.432413       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:51:36.432445       1 main.go:227] handling current node
	I1024 19:51:46.436812       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:51:46.436841       1 main.go:227] handling current node
	I1024 19:51:56.440957       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:51:56.440986       1 main.go:227] handling current node
	I1024 19:52:06.452877       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:06.452906       1 main.go:227] handling current node
	I1024 19:52:16.456649       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:16.456692       1 main.go:227] handling current node
	I1024 19:52:26.463154       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:26.463183       1 main.go:227] handling current node
	I1024 19:52:36.466333       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:36.466367       1 main.go:227] handling current node
	I1024 19:52:46.469818       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:46.469846       1 main.go:227] handling current node
	I1024 19:52:56.476975       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:56.477003       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [a247662fef54b5d20bc798cd13a283fbf75f727c692686b1a65ad9a06104b756] <==
	* I1024 19:46:14.196764       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	I1024 19:46:14.196779       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E1024 19:46:14.251145       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1024 19:46:14.261195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 19:46:14.261293       1 cache.go:39] Caches are synced for autoregister controller
	I1024 19:46:14.261607       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1024 19:46:14.261683       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1024 19:46:14.265932       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:46:15.070712       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1024 19:46:15.070742       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1024 19:46:15.091510       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1024 19:46:15.096417       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1024 19:46:15.096445       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1024 19:46:15.542121       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:46:15.590696       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1024 19:46:15.710627       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1024 19:46:15.711772       1 controller.go:609] quota admission added evaluator for: endpoints
	I1024 19:46:15.715202       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1024 19:46:16.512102       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1024 19:46:17.377027       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1024 19:46:17.484288       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1024 19:46:20.759281       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:46:32.696926       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1024 19:46:32.916294       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1024 19:46:57.240634       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [d5cc6c70a928b9ec522e10e51ad4dda1729336c1e6a9cee7a3bfa93eb55906d9] <==
	* I1024 19:46:32.686669       1 shared_informer.go:230] Caches are synced for GC 
	I1024 19:46:32.701240       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1024 19:46:32.716036       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"70777e8d-c2c0-440e-8b79-0e7a347c3cae", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I1024 19:46:32.716190       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1024 19:46:32.721861       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1024 19:46:32.735018       1 shared_informer.go:230] Caches are synced for expand 
	I1024 19:46:32.735938       1 shared_informer.go:230] Caches are synced for attach detach 
	I1024 19:46:32.758924       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"84bb21cf-905f-4df0-86c7-c55e7e5e2f57", APIVersion:"apps/v1", ResourceVersion:"334", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-s684d
	I1024 19:46:32.907819       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1024 19:46:32.949984       1 shared_informer.go:230] Caches are synced for stateful set 
	I1024 19:46:32.986162       1 shared_informer.go:230] Caches are synced for resource quota 
	I1024 19:46:32.989378       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1024 19:46:32.989485       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1024 19:46:33.035017       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1024 19:46:33.135050       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"7cc392f0-9361-4969-bc08-8573abe57d85", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-tcvng
	I1024 19:46:33.153247       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"eeb1d1c4-136d-48b7-b3b2-e207f619da49", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-qsxdg
	E1024 19:46:33.262530       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"eeb1d1c4-136d-48b7-b3b2-e207f619da49", ResourceVersion:"211", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63833773577, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40017d93e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40017d9400)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40017d9420), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017d9440), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017d9460), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017d9480), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017d94a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017d94e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40017ce8c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40017e4bd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000596cb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40005c6cb0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40017e4c20)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1024 19:46:33.339802       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1024 19:46:33.339845       1 shared_informer.go:230] Caches are synced for resource quota 
	I1024 19:46:42.652232       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1024 19:46:57.228967       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"62537c9b-dc7b-49f6-8e82-3eb3eba1caee", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1024 19:46:57.245226       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"af93d955-2d5b-44ae-977a-3853f317263f", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-zvwf7
	I1024 19:46:57.286170       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"879063ad-d05c-4ead-a561-3e71067b211e", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-twz9h
	I1024 19:46:57.336503       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"402c236a-0099-413d-b33c-f34ba3ac1468", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-wt5cm
	
	* 
	* ==> kube-proxy [efa6e1f60f591b7123f48b59f9a6a8ab192fa3e090606094a68d65f5f7fab865] <==
	* W1024 19:46:33.720766       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1024 19:46:33.732515       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1024 19:46:33.732641       1 server_others.go:186] Using iptables Proxier.
	I1024 19:46:33.733028       1 server.go:583] Version: v1.18.20
	I1024 19:46:33.734711       1 config.go:315] Starting service config controller
	I1024 19:46:33.734799       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1024 19:46:33.734877       1 config.go:133] Starting endpoints config controller
	I1024 19:46:33.734931       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1024 19:46:33.835325       1 shared_informer.go:230] Caches are synced for service config 
	I1024 19:46:33.837171       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [c8cf3612021c7fef779b711b930241776408303c48fb9e0d242b5b964a19c69c] <==
	* I1024 19:46:14.270058       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1024 19:46:14.270150       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1024 19:46:14.272637       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1024 19:46:14.274064       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:46:14.274084       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:46:14.274103       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1024 19:46:14.279644       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:46:14.279819       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:46:14.279928       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:46:14.280029       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:46:14.280132       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:46:14.280229       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:46:14.280343       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:46:14.280443       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:46:14.282185       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:46:14.282342       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:46:14.282530       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:46:14.282696       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:46:15.137942       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:46:15.166096       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:46:15.177169       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:46:15.278302       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:46:15.298843       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1024 19:46:18.274194       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1024 19:46:33.010923       1 factory.go:503] pod: kube-system/coredns-66bff467f8-s684d is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Oct 24 19:50:41 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:50:41.794228    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:51:07 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:07.418108    1622 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
	Oct 24 19:51:07 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:07.418216    1622 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/fc328402-38e1-4b7c-b548-97a7c4fa1536-webhook-cert podName:fc328402-38e1-4b7c-b548-97a7c4fa1536 nodeName:}" failed. No retries permitted until 2023-10-24 19:53:09.418190739 +0000 UTC m=+412.092827801 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc328402-38e1-4b7c-b548-97a7c4fa1536-webhook-cert\") pod \"ingress-nginx-controller-7fcf777cb7-zvwf7\" (UID: \"fc328402-38e1-4b7c-b548-97a7c4fa1536\") : secret \"ingress-nginx-admission\" not found"
	Oct 24 19:51:14 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:14.793104    1622 kubelet.go:1703] Unable to attach or mount volumes for pod "ingress-nginx-controller-7fcf777cb7-zvwf7_ingress-nginx(fc328402-38e1-4b7c-b548-97a7c4fa1536)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-77hxk]: timed out waiting for the condition; skipping pod
	Oct 24 19:51:14 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:14.793142    1622 pod_workers.go:191] Error syncing pod fc328402-38e1-4b7c-b548-97a7c4fa1536 ("ingress-nginx-controller-7fcf777cb7-zvwf7_ingress-nginx(fc328402-38e1-4b7c-b548-97a7c4fa1536)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-77hxk]: timed out waiting for the condition
	Oct 24 19:51:20 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:20.852508    1622 container_manager_linux.go:512] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26, memory: /docker/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/system.slice/kubelet.service
	Oct 24 19:51:24 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:24.326673    1622 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:51:24 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:24.326749    1622 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:51:24 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:24.326950    1622 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:51:24 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:24.326993    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Oct 24 19:51:35 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:35.793644    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:51:46 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:46.793542    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:51:54 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:54.612709    1622 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:51:54 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:54.612770    1622 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:51:54 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:54.612839    1622 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:51:54 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:54.612876    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Oct 24 19:51:57 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:51:57.793789    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:52:04 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:52:04.793922    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:52:11 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:52:11.793605    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:52:15 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:52:15.793525    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:52:22 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:52:22.793618    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:52:29 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:52:29.793808    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:52:37 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:52:37.793616    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:52:44 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:52:44.794109    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:52:58 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:52:58.794010    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [df4fd989810972fead0fe8c58d47837f7988fc6412c45fd14a00c36baf2249b3] <==
	* I1024 19:46:48.077897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:46:48.092104       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:46:48.092180       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:46:48.098967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:46:48.099153       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-989906_1cd0a73d-5d54-4145-a2f5-3a5e6f750825!
	I1024 19:46:48.100140       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0998217-79cc-42eb-a003-eb345b5b1881", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-989906_1cd0a73d-5d54-4145-a2f5-3a5e6f750825 became leader
	I1024 19:46:48.200117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-989906_1cd0a73d-5d54-4145-a2f5-3a5e6f750825!
	

-- /stdout --
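The kubelet section in the log dump above pins down the proximate cause of this failure: every pull of docker.io/jettech/kube-webhook-certgen is rejected with toomanyrequests, Docker Hub's anonymous pull rate limit, so the admission create/patch jobs never run and the webhook-cert secret the controller pod needs is never created. As a minimal diagnostic sketch (not part of the test suite; it follows the check documented at https://docs.docker.com/docker-hub/download-rate-limit/), the remaining anonymous quota for a runner's egress IP can be read without consuming a pull:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Anonymous token scoped to Docker's rate-limit preview repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD instead of GET so the check itself does not count as a pull.
	req, _ := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

Authenticated pulls (docker login on the runner) or pre-loading the image into the cluster with "minikube image load" would likely avoid the anonymous limit entirely.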
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-989906 -n ingress-addon-legacy-989906
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-989906 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-twz9h ingress-nginx-admission-patch-wt5cm ingress-nginx-controller-7fcf777cb7-zvwf7
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-989906 describe pod ingress-nginx-admission-create-twz9h ingress-nginx-admission-patch-wt5cm ingress-nginx-controller-7fcf777cb7-zvwf7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-989906 describe pod ingress-nginx-admission-create-twz9h ingress-nginx-admission-patch-wt5cm ingress-nginx-controller-7fcf777cb7-zvwf7: exit status 1 (81.646893ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-twz9h" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wt5cm" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-zvwf7" not found

** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-989906 describe pod ingress-nginx-admission-create-twz9h ingress-nginx-admission-patch-wt5cm ingress-nginx-controller-7fcf777cb7-zvwf7: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.59s)
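Note that the describe above reports NotFound for all three pods: they were torn down between the listing at helpers_test.go:261 and the describe at helpers_test.go:277, so the non-zero exit here reflects a post-mortem race rather than an additional failure. A client-go post-mortem would typically tolerate that race; a minimal sketch, assuming a *kubernetes.Clientset built elsewhere (the dumpPods helper and package name are illustrative):

package postmortem

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpPods inspects each named pod but tolerates pods that were deleted
// between enumeration and inspection, the race visible in the output above.
func dumpPods(ctx context.Context, cs *kubernetes.Clientset, ns string, names []string) {
	for _, name := range names {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("pod %s: already deleted, skipping\n", name)
			continue
		}
		if err != nil {
			fmt.Printf("pod %s: %v\n", name, err)
			continue
		}
		fmt.Printf("pod %s: phase=%s\n", name, pod.Status.Phase)
	}
}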

TestIngressAddonLegacy/serial/ValidateIngressAddons (92.53s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-989906 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1024 19:53:00.786705 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
addons_test.go:206: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-989906 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (1m30.071444211s)

** stderr ** 
	error: timed out waiting for the condition on pods/ingress-nginx-controller-7fcf777cb7-zvwf7

** /stderr **
addons_test.go:207: failed waiting for ingress-nginx-controller : exit status 1
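The command that timed out here polls for the Ready condition on pods labeled app.kubernetes.io/component=controller; since the controller pod never mounted webhook-cert (the same mount failure shown in the kubelet section of the previous post-mortem), the condition could not turn true within 90s. For reference, a minimal client-go sketch of the same readiness wait, assuming KUBECONFIG points at the cluster under test (function and message names are illustrative):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForReadyController(context.Background(), cs); err != nil {
		panic(err)
	}
	fmt.Println("controller is Ready")
}

func waitForReadyController(ctx context.Context, cs *kubernetes.Clientset) error {
	deadline := time.Now().Add(90 * time.Second) // same timeout as the kubectl wait above
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("ingress-nginx").List(ctx, metav1.ListOptions{
			LabelSelector: "app.kubernetes.io/component=controller",
		})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for the Ready condition")
}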
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-989906
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-989906:

-- stdout --
	[
	    {
	        "Id": "7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26",
	        "Created": "2023-10-24T19:45:41.323562437Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1146012,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T19:45:41.646832689Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/hosts",
	        "LogPath": "/var/lib/docker/containers/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26/7b10e689dc76171184b1b4facadb27dcc9fe5c54dd33fd06b9f43082a3de7b26-json.log",
	        "Name": "/ingress-addon-legacy-989906",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-989906:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-989906",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7b0c0af87b8d6d0b838c30216a29c247d26811c552f6cb3d071873832d83f398-init/diff:/var/lib/docker/overlay2/ab7e622cf253e7484ae8d7af3c5bb3ba83f211c878ee7a8c069db30bbba78b6c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b0c0af87b8d6d0b838c30216a29c247d26811c552f6cb3d071873832d83f398/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b0c0af87b8d6d0b838c30216a29c247d26811c552f6cb3d071873832d83f398/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b0c0af87b8d6d0b838c30216a29c247d26811c552f6cb3d071873832d83f398/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-989906",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-989906/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-989906",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-989906",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-989906",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec763f8756e976b12ab743b6414c184b0e4c92d9c94acfe37ba7372650a18484",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34225"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34223"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34222"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ec763f8756e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-989906": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7b10e689dc76",
	                        "ingress-addon-legacy-989906"
	                    ],
	                    "NetworkID": "ab20f783397596b6b2b42c66fe9839120e2f1a6a22433e710e060b2d2df080fb",
	                    "EndpointID": "41513c7ac0b0fd3a0f18525d51134f59c305d56a782425a52e2f615aea7627cb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
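One detail worth reading out of this inspect output: HostConfig.PortBindings requests HostPort "" for every exposed port, meaning minikube lets Docker assign ephemeral host ports, and the assigned values appear only under NetworkSettings.Ports (here 8443/tcp maps to 127.0.0.1:34222). The CLI equivalent is "docker port ingress-addon-legacy-989906 8443"; a minimal sketch of the programmatic lookup with the Docker Go SDK (github.com/docker/docker/client):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Connect using the usual DOCKER_HOST environment, negotiating API version.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	inspect, err := cli.ContainerInspect(context.Background(), "ingress-addon-legacy-989906")
	if err != nil {
		panic(err)
	}
	// The ephemeral binding chosen at start time lives under NetworkSettings.Ports.
	for _, b := range inspect.NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("8443/tcp -> %s:%s\n", b.HostIP, b.HostPort)
	}
}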
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-989906 -n ingress-addon-legacy-989906
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-989906 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-989906 logs -n 25: (1.407110507s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh findmnt                                          | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh findmnt                                          | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh findmnt                                          | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh findmnt                                          | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh sudo cat                                         | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | /etc/test/nested/copy/1117634/hosts                                    |                             |         |         |                     |                     |
	| start          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | --dry-run --memory                                                     |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                |                             |         |         |                     |                     |
	|                | --driver=docker                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| start          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | --dry-run --memory                                                     |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                |                             |         |         |                     |                     |
	|                | --driver=docker                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| start          | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | --dry-run --alsologtostderr                                            |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                     | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | -p functional-419430                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| image          | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-419430 ssh pgrep                                            | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-419430 image build -t                                       | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | localhost/my-image:functional-419430                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-419430 image ls                                             | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	| image          | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-419430                                                      | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:44 UTC | 24 Oct 23 19:44 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-419430                                                   | functional-419430           | jenkins | v1.31.2 | 24 Oct 23 19:45 UTC | 24 Oct 23 19:45 UTC |
	| start          | -p ingress-addon-legacy-989906                                         | ingress-addon-legacy-989906 | jenkins | v1.31.2 | 24 Oct 23 19:45 UTC | 24 Oct 23 19:46 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-989906                                            | ingress-addon-legacy-989906 | jenkins | v1.31.2 | 24 Oct 23 19:46 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-989906                                            | ingress-addon-legacy-989906 | jenkins | v1.31.2 | 24 Oct 23 19:53 UTC | 24 Oct 23 19:53 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
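
Note that the final `addons enable ingress` row above records a start time (19:46 UTC) but no end time, i.e. the enable never completed within the test window. A minimal sketch for replaying that step by hand, assuming the same profile still exists (profile name and flags taken from the table):

	out/minikube-linux-arm64 -p ingress-addon-legacy-989906 addons enable ingress --alsologtostderr -v=5
	# If it hangs, the controller pod's status usually explains why:
	kubectl --context ingress-addon-legacy-989906 -n ingress-nginx get pods -o wide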
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:45:16
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
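
The header format above is the standard klog/glog prefix, so the severity is the first byte of every line and can be filtered directly. A small sketch, assuming the log below has been saved to a file named last_start.log (hypothetical name):

	# Keep only Warning/Error lines:
	grep -E '^[WE][0-9]{4} ' last_start.log
	# Or trace a single source file through the run:
	grep 'network_create.go' last_start.log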
	I1024 19:45:16.720657 1145561 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:45:16.720861 1145561 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:45:16.720876 1145561 out.go:309] Setting ErrFile to fd 2...
	I1024 19:45:16.720883 1145561 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:45:16.721227 1145561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 19:45:16.721862 1145561 out.go:303] Setting JSON to false
	I1024 19:45:16.722777 1145561 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34066,"bootTime":1698142651,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:45:16.722857 1145561 start.go:138] virtualization:  
	I1024 19:45:16.725929 1145561 out.go:177] * [ingress-addon-legacy-989906] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 19:45:16.728980 1145561 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:45:16.729138 1145561 notify.go:220] Checking for updates...
	I1024 19:45:16.733355 1145561 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:45:16.735713 1145561 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:45:16.737820 1145561 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:45:16.740181 1145561 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 19:45:16.742425 1145561 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:45:16.744791 1145561 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:45:16.769699 1145561 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:45:16.769865 1145561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:45:16.855969 1145561 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-24 19:45:16.845921506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
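
minikube gathers the blob above by shelling out to `docker system info`; the handful of fields it actually acts on can be pulled out by hand. A sketch, assuming jq is installed:

	docker system info --format '{{json .}}' \
	  | jq '{NCPU, MemTotal, CgroupDriver, ServerVersion, OperatingSystem}'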
	I1024 19:45:16.856079 1145561 docker.go:295] overlay module found
	I1024 19:45:16.860454 1145561 out.go:177] * Using the docker driver based on user configuration
	I1024 19:45:16.862609 1145561 start.go:298] selected driver: docker
	I1024 19:45:16.862626 1145561 start.go:902] validating driver "docker" against <nil>
	I1024 19:45:16.862693 1145561 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:45:16.863316 1145561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:45:16.934791 1145561 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-24 19:45:16.924184492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:45:16.934946 1145561 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:45:16.935178 1145561 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:45:16.937321 1145561 out.go:177] * Using Docker driver with root privileges
	I1024 19:45:16.939553 1145561 cni.go:84] Creating CNI manager for ""
	I1024 19:45:16.939571 1145561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:45:16.939587 1145561 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:45:16.939597 1145561 start_flags.go:323] config:
	{Name:ingress-addon-legacy-989906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-989906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:45:16.942767 1145561 out.go:177] * Starting control plane node ingress-addon-legacy-989906 in cluster ingress-addon-legacy-989906
	I1024 19:45:16.945086 1145561 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:45:16.947576 1145561 out.go:177] * Pulling base image ...
	I1024 19:45:16.950006 1145561 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:45:16.950106 1145561 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:45:16.967575 1145561 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 19:45:16.967600 1145561 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 19:45:17.033828 1145561 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1024 19:45:17.033865 1145561 cache.go:57] Caching tarball of preloaded images
	I1024 19:45:17.034044 1145561 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:45:17.037482 1145561 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1024 19:45:17.041220 1145561 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1024 19:45:17.158833 1145561 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1024 19:45:33.344124 1145561 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1024 19:45:33.344224 1145561 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1024 19:45:34.602993 1145561 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
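
The preload URL above carries its md5 as a query parameter, so the integrity check minikube just performed can be replayed manually. A sketch using the URL and checksum from the download line:

	URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4'
	curl -fLO "$URL"
	echo '8ddd7f37d9a9977fe856222993d36c3d  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4' | md5sum -c -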
	I1024 19:45:34.603380 1145561 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/config.json ...
	I1024 19:45:34.603416 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/config.json: {Name:mk8d09da22c56b346f06e446c8fe836fdf8fc271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
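
The flag-derived config dumped earlier is what gets persisted to this config.json, so it can be queried after the run. A sketch, assuming jq is available and that the top-level JSON keys match the Go struct field names shown in the dump:

	jq '.KubernetesConfig | {KubernetesVersion, ClusterName, ContainerRuntime, NetworkPlugin}' \
	  /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/config.json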
	I1024 19:45:34.603625 1145561 cache.go:195] Successfully downloaded all kic artifacts
	I1024 19:45:34.603685 1145561 start.go:365] acquiring machines lock for ingress-addon-legacy-989906: {Name:mk8d4eab24c712234aec5d6857de53c99eec40c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:45:34.603762 1145561 start.go:369] acquired machines lock for "ingress-addon-legacy-989906" in 61.342µs
	I1024 19:45:34.603788 1145561 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-989906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-989906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:45:34.603888 1145561 start.go:125] createHost starting for "" (driver="docker")
	I1024 19:45:34.606696 1145561 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1024 19:45:34.607039 1145561 start.go:159] libmachine.API.Create for "ingress-addon-legacy-989906" (driver="docker")
	I1024 19:45:34.607070 1145561 client.go:168] LocalClient.Create starting
	I1024 19:45:34.607142 1145561 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem
	I1024 19:45:34.607184 1145561 main.go:141] libmachine: Decoding PEM data...
	I1024 19:45:34.607206 1145561 main.go:141] libmachine: Parsing certificate...
	I1024 19:45:34.607305 1145561 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem
	I1024 19:45:34.607331 1145561 main.go:141] libmachine: Decoding PEM data...
	I1024 19:45:34.607355 1145561 main.go:141] libmachine: Parsing certificate...
	I1024 19:45:34.607758 1145561 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-989906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 19:45:34.626013 1145561 cli_runner.go:211] docker network inspect ingress-addon-legacy-989906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 19:45:34.626111 1145561 network_create.go:281] running [docker network inspect ingress-addon-legacy-989906] to gather additional debugging logs...
	I1024 19:45:34.626134 1145561 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-989906
	W1024 19:45:34.643204 1145561 cli_runner.go:211] docker network inspect ingress-addon-legacy-989906 returned with exit code 1
	I1024 19:45:34.643237 1145561 network_create.go:284] error running [docker network inspect ingress-addon-legacy-989906]: docker network inspect ingress-addon-legacy-989906: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-989906 not found
	I1024 19:45:34.643253 1145561 network_create.go:286] output of [docker network inspect ingress-addon-legacy-989906]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-989906 not found
	
	** /stderr **
	I1024 19:45:34.643364 1145561 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:45:34.665093 1145561 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40006e1c40}
	I1024 19:45:34.665130 1145561 network_create.go:124] attempt to create docker network ingress-addon-legacy-989906 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1024 19:45:34.665194 1145561 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 ingress-addon-legacy-989906
	I1024 19:45:34.737197 1145561 network_create.go:108] docker network ingress-addon-legacy-989906 192.168.49.0/24 created
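
The network is created with a fixed subnet, gateway, MTU, and minikube labels; a plain inspect confirms what was provisioned. A sketch:

	docker network inspect ingress-addon-legacy-989906 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'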
	I1024 19:45:34.737230 1145561 kic.go:118] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-989906" container
	I1024 19:45:34.737312 1145561 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 19:45:34.754218 1145561 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-989906 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --label created_by.minikube.sigs.k8s.io=true
	I1024 19:45:34.773288 1145561 oci.go:103] Successfully created a docker volume ingress-addon-legacy-989906
	I1024 19:45:34.773370 1145561 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-989906-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --entrypoint /usr/bin/test -v ingress-addon-legacy-989906:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 19:45:36.316739 1145561 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-989906-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --entrypoint /usr/bin/test -v ingress-addon-legacy-989906:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib: (1.543298122s)
	I1024 19:45:36.316767 1145561 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-989906
	I1024 19:45:36.316793 1145561 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:45:36.316813 1145561 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 19:45:36.316909 1145561 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-989906:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 19:45:41.236328 1145561 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-989906:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.919358735s)
	I1024 19:45:41.236361 1145561 kic.go:200] duration metric: took 4.919545 seconds to extract preloaded images to volume
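
The preloaded images are unpacked into the named volume before the node container exists, which a throwaway container can verify. A sketch, assuming an alpine image is available locally and that the extracted image store lands under lib/containers, as the tarball layout suggests:

	docker run --rm -v ingress-addon-legacy-989906:/var alpine ls /var/lib/containers/storage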
	W1024 19:45:41.236503 1145561 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 19:45:41.236625 1145561 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 19:45:41.307096 1145561 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-989906 --name ingress-addon-legacy-989906 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-989906 --network ingress-addon-legacy-989906 --ip 192.168.49.2 --volume ingress-addon-legacy-989906:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:45:41.655653 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Running}}
	I1024 19:45:41.683958 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:45:41.708311 1145561 cli_runner.go:164] Run: docker exec ingress-addon-legacy-989906 stat /var/lib/dpkg/alternatives/iptables
	I1024 19:45:41.798641 1145561 oci.go:144] the created container "ingress-addon-legacy-989906" has a running status.
	I1024 19:45:41.798671 1145561 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa...
	I1024 19:45:42.073358 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1024 19:45:42.073461 1145561 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 19:45:42.110141 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:45:42.148426 1145561 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 19:45:42.148445 1145561 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-989906 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 19:45:42.253345 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:45:42.280808 1145561 machine.go:88] provisioning docker machine ...
	I1024 19:45:42.280840 1145561 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-989906"
	I1024 19:45:42.280909 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:42.306551 1145561 main.go:141] libmachine: Using SSH client type: native
	I1024 19:45:42.307027 1145561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1024 19:45:42.307042 1145561 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-989906 && echo "ingress-addon-legacy-989906" | sudo tee /etc/hostname
	I1024 19:45:42.307672 1145561 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1024 19:45:45.460134 1145561 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-989906
	
	I1024 19:45:45.460219 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:45.479965 1145561 main.go:141] libmachine: Using SSH client type: native
	I1024 19:45:45.480374 1145561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1024 19:45:45.480398 1145561 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-989906' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-989906/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-989906' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:45:45.618651 1145561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
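
All of this provisioning runs over SSH to a forwarded localhost port (34225 in this run) using the key generated earlier. The same session can be opened by hand; a sketch with the port and key path taken from the log above:

	ssh -o StrictHostKeyChecking=no -p 34225 \
	  -i /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa \
	  docker@127.0.0.1 hostname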
	I1024 19:45:45.618682 1145561 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 19:45:45.618717 1145561 ubuntu.go:177] setting up certificates
	I1024 19:45:45.618726 1145561 provision.go:83] configureAuth start
	I1024 19:45:45.618787 1145561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-989906
	I1024 19:45:45.636613 1145561 provision.go:138] copyHostCerts
	I1024 19:45:45.636660 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 19:45:45.636689 1145561 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 19:45:45.636700 1145561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 19:45:45.636778 1145561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 19:45:45.636861 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 19:45:45.636885 1145561 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 19:45:45.636893 1145561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 19:45:45.636919 1145561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 19:45:45.636960 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 19:45:45.636981 1145561 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 19:45:45.636988 1145561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 19:45:45.637012 1145561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 19:45:45.637059 1145561 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-989906 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-989906]
	I1024 19:45:45.983782 1145561 provision.go:172] copyRemoteCerts
	I1024 19:45:45.983851 1145561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:45:45.983894 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.008964 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.108955 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 19:45:46.109013 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:45:46.137220 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 19:45:46.137288 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1024 19:45:46.165791 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 19:45:46.165857 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:45:46.193331 1145561 provision.go:86] duration metric: configureAuth took 574.587724ms
	I1024 19:45:46.193357 1145561 ubuntu.go:193] setting minikube options for container-runtime
	I1024 19:45:46.193548 1145561 config.go:182] Loaded profile config "ingress-addon-legacy-989906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:45:46.193664 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.212236 1145561 main.go:141] libmachine: Using SSH client type: native
	I1024 19:45:46.212677 1145561 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34225 <nil> <nil>}
	I1024 19:45:46.212700 1145561 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 19:45:46.496361 1145561 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
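
The `%!s(MISSING)` in the logged command is a Go format-verb artifact in the log rendering, not what ran on the node; judging from the output above, the intended effect is a one-line sysconfig drop-in followed by a runtime restart. A reconstructed sketch (an assumption, not the literal command):

	sudo mkdir -p /etc/sysconfig
	printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio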
	
	I1024 19:45:46.496383 1145561 machine.go:91] provisioned docker machine in 4.21555483s
	I1024 19:45:46.496393 1145561 client.go:171] LocalClient.Create took 11.889317728s
	I1024 19:45:46.496414 1145561 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-989906" took 11.889367549s
	I1024 19:45:46.496425 1145561 start.go:300] post-start starting for "ingress-addon-legacy-989906" (driver="docker")
	I1024 19:45:46.496435 1145561 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:45:46.496507 1145561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:45:46.496565 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.515970 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.616346 1145561 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:45:46.620314 1145561 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 19:45:46.620391 1145561 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 19:45:46.620410 1145561 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 19:45:46.620417 1145561 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 19:45:46.620430 1145561 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 19:45:46.620503 1145561 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 19:45:46.620594 1145561 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 19:45:46.620605 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> /etc/ssl/certs/11176342.pem
	I1024 19:45:46.620715 1145561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:45:46.630640 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 19:45:46.657719 1145561 start.go:303] post-start completed in 161.27768ms
	I1024 19:45:46.658095 1145561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-989906
	I1024 19:45:46.676059 1145561 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/config.json ...
	I1024 19:45:46.676327 1145561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:45:46.676377 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.694181 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.787492 1145561 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 19:45:46.793034 1145561 start.go:128] duration metric: createHost completed in 12.189125393s
	I1024 19:45:46.793059 1145561 start.go:83] releasing machines lock for "ingress-addon-legacy-989906", held for 12.18928417s
	I1024 19:45:46.793152 1145561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-989906
	I1024 19:45:46.811259 1145561 ssh_runner.go:195] Run: cat /version.json
	I1024 19:45:46.811316 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.811554 1145561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:45:46.811617 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:45:46.832859 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.840696 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:45:46.926099 1145561 ssh_runner.go:195] Run: systemctl --version
	I1024 19:45:47.063105 1145561 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 19:45:47.209602 1145561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 19:45:47.215031 1145561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:45:47.241154 1145561 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 19:45:47.241257 1145561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:45:47.278613 1145561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
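
The conflicting CNI configs are not deleted, only renamed to `*.mk_disabled` so kindnet can own pod networking; undoing the change is just a rename back. A sketch to run inside the node, assuming you wanted to restore them:

	sudo sh -c 'for f in /etc/cni/net.d/*.mk_disabled; do mv "$f" "${f%.mk_disabled}"; done'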
	I1024 19:45:47.278634 1145561 start.go:472] detecting cgroup driver to use...
	I1024 19:45:47.278693 1145561 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 19:45:47.278767 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:45:47.297632 1145561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:45:47.311228 1145561 docker.go:198] disabling cri-docker service (if available) ...
	I1024 19:45:47.311297 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 19:45:47.327106 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 19:45:47.343781 1145561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 19:45:47.444787 1145561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 19:45:47.554033 1145561 docker.go:214] disabling docker service ...
	I1024 19:45:47.554139 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 19:45:47.575780 1145561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 19:45:47.589988 1145561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 19:45:47.693945 1145561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 19:45:47.805962 1145561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 19:45:47.819422 1145561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:45:47.839206 1145561 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 19:45:47.839271 1145561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:45:47.851290 1145561 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 19:45:47.851361 1145561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:45:47.863486 1145561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 19:45:47.875288 1145561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
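
The three sed edits above pin the pause image, switch the cgroup manager, and re-add conmon_cgroup, leaving the drop-in in a known state. A sketch to confirm the expected end state (reconstructed from the commands, an assumption):

	# Expect: pause_image = "registry.k8s.io/pause:3.2",
	#         cgroup_manager = "cgroupfs", conmon_cgroup = "pod"
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf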
	I1024 19:45:47.886752 1145561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:45:47.897673 1145561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:45:47.908072 1145561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:45:47.918290 1145561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:45:48.005278 1145561 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 19:45:48.134108 1145561 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 19:45:48.134235 1145561 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 19:45:48.139234 1145561 start.go:540] Will wait 60s for crictl version
	I1024 19:45:48.139332 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:48.143894 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:45:48.188700 1145561 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 19:45:48.188804 1145561 ssh_runner.go:195] Run: crio --version
	I1024 19:45:48.233531 1145561 ssh_runner.go:195] Run: crio --version
	I1024 19:45:48.277152 1145561 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1024 19:45:48.279249 1145561 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-989906 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 19:45:48.296922 1145561 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1024 19:45:48.301639 1145561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:45:48.314706 1145561 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1024 19:45:48.314776 1145561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:45:48.372347 1145561 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1024 19:45:48.372418 1145561 ssh_runner.go:195] Run: which lz4
	I1024 19:45:48.377023 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1024 19:45:48.377156 1145561 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1024 19:45:48.381726 1145561 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1024 19:45:48.381782 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1024 19:45:50.474362 1145561 crio.go:444] Took 2.097254 seconds to copy over tarball
	I1024 19:45:50.474429 1145561 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1024 19:45:53.136608 1145561 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.662154995s)
	I1024 19:45:53.136685 1145561 crio.go:451] Took 2.662300 seconds to extract the tarball
	I1024 19:45:53.136709 1145561 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1024 19:45:53.315387 1145561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 19:45:53.365830 1145561 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1024 19:45:53.365859 1145561 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1024 19:45:53.365909 1145561 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:53.366123 1145561 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:45:53.366196 1145561 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:45:53.366278 1145561 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.366358 1145561 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:45:53.366432 1145561 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1024 19:45:53.366503 1145561 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:45:53.366571 1145561 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1024 19:45:53.367488 1145561 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1024 19:45:53.367948 1145561 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.368145 1145561 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:45:53.368307 1145561 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:45:53.368435 1145561 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1024 19:45:53.368573 1145561 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:53.368812 1145561 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:45:53.369924 1145561 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	W1024 19:45:53.676244 1145561 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.676498 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1024 19:45:53.699165 1145561 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.699407 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1024 19:45:53.704193 1145561 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.704430 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1024 19:45:53.737104 1145561 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.737363 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.739127 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1024 19:45:53.748276 1145561 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.748620 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:45:53.757956 1145561 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1024 19:45:53.758094 1145561 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1024 19:45:53.758184 1145561 ssh_runner.go:195] Run: which crictl
	W1024 19:45:53.766827 1145561 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:53.767121 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1024 19:45:53.884078 1145561 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1024 19:45:53.884321 1145561 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:45:53.884395 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.884185 1145561 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1024 19:45:53.884505 1145561 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:45:53.884541 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.884241 1145561 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1024 19:45:53.884618 1145561 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.884684 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.884292 1145561 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1024 19:45:53.884763 1145561 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1024 19:45:53.884812 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.894435 1145561 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1024 19:45:53.894525 1145561 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:45:53.894602 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.894721 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1024 19:45:53.916428 1145561 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1024 19:45:53.916519 1145561 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1024 19:45:53.916607 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:53.918123 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1024 19:45:53.918254 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1024 19:45:53.918343 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1024 19:45:53.918652 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1024 19:45:53.980993 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1024 19:45:53.981070 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1024 19:45:53.981134 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	W1024 19:45:54.090868 1145561 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1024 19:45:54.091117 1145561 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:54.100474 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1024 19:45:54.100625 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1024 19:45:54.100701 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1024 19:45:54.100772 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1024 19:45:54.100872 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1024 19:45:54.100950 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1024 19:45:54.243851 1145561 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1024 19:45:54.243907 1145561 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:54.243965 1145561 ssh_runner.go:195] Run: which crictl
	I1024 19:45:54.248278 1145561 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:45:54.312478 1145561 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1024 19:45:54.312574 1145561 cache_images.go:92] LoadImages completed in 946.699962ms
	W1024 19:45:54.312655 1145561 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7: no such file or directory
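The W-lines above all report one root cause: the locally cached manifests resolve to amd64 layers on this arm64 host, so every control-plane image is removed and queued for transfer, and the final load fails because the arm64 cache file is absent. A hypothetical spot check for a single image with podman (image name taken from the log):

    # Compare the image's recorded architecture against the host's.
    img="registry.k8s.io/coredns:1.6.7"
    echo "host: $(uname -m)"   # aarch64 on this arm64 worker
    sudo podman image inspect --format '{{.Architecture}}' "${img}" \
      || echo "${img} not present in the local store"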
	I1024 19:45:54.312743 1145561 ssh_runner.go:195] Run: crio config
	I1024 19:45:54.372541 1145561 cni.go:84] Creating CNI manager for ""
	I1024 19:45:54.372564 1145561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:45:54.372595 1145561 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 19:45:54.372619 1145561 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-989906 NodeName:ingress-addon-legacy-989906 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1024 19:45:54.372763 1145561 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-989906"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:45:54.372844 1145561 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-989906 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-989906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 19:45:54.372922 1145561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1024 19:45:54.383408 1145561 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:45:54.383480 1145561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:45:54.393944 1145561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1024 19:45:54.415237 1145561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1024 19:45:54.437261 1145561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
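The rendered kubeadm config (2123 bytes, shown in full above) is staged as kubeadm.yaml.new and swapped into its final path later in the log. Before a real init, a dry run against such a file is a cheap sanity check; this is a sketch assuming kubeadm v1.18's --dry-run behaves as documented and the file has already been moved to /var/tmp/minikube/kubeadm.yaml:

    # Parse and exercise the config without changing anything on the node,
    # using the same versioned binaries minikube manages itself.
    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run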
	I1024 19:45:54.458547 1145561 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1024 19:45:54.462717 1145561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:45:54.476274 1145561 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906 for IP: 192.168.49.2
	I1024 19:45:54.476309 1145561 certs.go:190] acquiring lock for shared ca certs: {Name:mka7b9c27527bac3ad97e94531dcdc2bc2059d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:54.476469 1145561 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key
	I1024 19:45:54.476515 1145561 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key
	I1024 19:45:54.476565 1145561 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key
	I1024 19:45:54.476580 1145561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt with IP's: []
	I1024 19:45:54.851770 1145561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt ...
	I1024 19:45:54.851803 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: {Name:mke51ccdea6d497cc04aa9302cd9e407423a2605 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:54.852030 1145561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key ...
	I1024 19:45:54.852045 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key: {Name:mk73d3af778ef013d579b8c7642b297d2f6d3187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:54.852144 1145561 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key.dd3b5fb2
	I1024 19:45:54.852162 1145561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 19:45:55.428504 1145561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt.dd3b5fb2 ...
	I1024 19:45:55.428537 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt.dd3b5fb2: {Name:mkb11342dc132c9193a45c52d9d5f1361f2e75a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:55.428727 1145561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key.dd3b5fb2 ...
	I1024 19:45:55.428740 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key.dd3b5fb2: {Name:mk499274b62d050f26f22ab6c93f6503aa0b4c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:55.428826 1145561 certs.go:337] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt
	I1024 19:45:55.428907 1145561 certs.go:341] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key
	I1024 19:45:55.428964 1145561 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key
	I1024 19:45:55.428980 1145561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt with IP's: []
	I1024 19:45:55.611544 1145561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt ...
	I1024 19:45:55.611576 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt: {Name:mk0f4b8a40022dbc1abe2d378501c36f4803e0ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:55.611765 1145561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key ...
	I1024 19:45:55.611778 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key: {Name:mk79e709796ffc3e86f935d3971a5ee1618306ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:45:55.611864 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1024 19:45:55.611885 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1024 19:45:55.611898 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1024 19:45:55.611913 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1024 19:45:55.611927 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 19:45:55.611940 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 19:45:55.611960 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 19:45:55.611974 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 19:45:55.612025 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem (1338 bytes)
	W1024 19:45:55.612061 1145561 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634_empty.pem, impossibly tiny 0 bytes
	I1024 19:45:55.612074 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 19:45:55.612100 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:45:55.612127 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:45:55.612157 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem (1675 bytes)
	I1024 19:45:55.612209 1145561 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 19:45:55.612240 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> /usr/share/ca-certificates/11176342.pem
	I1024 19:45:55.612255 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:45:55.612266 1145561 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem -> /usr/share/ca-certificates/1117634.pem
	I1024 19:45:55.612851 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:45:55.641523 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1024 19:45:55.670346 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:45:55.698761 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:45:55.726959 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:45:55.754846 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 19:45:55.782763 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:45:55.810992 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:45:55.839053 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /usr/share/ca-certificates/11176342.pem (1708 bytes)
	I1024 19:45:55.866971 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:45:55.894266 1145561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem --> /usr/share/ca-certificates/1117634.pem (1338 bytes)
	I1024 19:45:55.921997 1145561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:45:55.942775 1145561 ssh_runner.go:195] Run: openssl version
	I1024 19:45:55.949730 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11176342.pem && ln -fs /usr/share/ca-certificates/11176342.pem /etc/ssl/certs/11176342.pem"
	I1024 19:45:55.961158 1145561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11176342.pem
	I1024 19:45:55.965868 1145561 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:36 /usr/share/ca-certificates/11176342.pem
	I1024 19:45:55.965985 1145561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11176342.pem
	I1024 19:45:55.974471 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11176342.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:45:55.985845 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:45:55.996959 1145561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:45:56.002035 1145561 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:45:56.002115 1145561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:45:56.011228 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:45:56.023152 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117634.pem && ln -fs /usr/share/ca-certificates/1117634.pem /etc/ssl/certs/1117634.pem"
	I1024 19:45:56.034907 1145561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117634.pem
	I1024 19:45:56.039720 1145561 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:36 /usr/share/ca-certificates/1117634.pem
	I1024 19:45:56.039825 1145561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117634.pem
	I1024 19:45:56.048502 1145561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117634.pem /etc/ssl/certs/51391683.0"
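The pattern repeated three times above (ls, hash, conditional ln) is the standard OpenSSL CA-directory layout: each certificate is also reachable under <subject-hash>.0 so verification can locate it by hash. The symlink name is derived like this (using the minikubeCA file from the log):

    # Compute the subject hash openssl uses to look up CAs in /etc/ssl/certs,
    # then install the <hash>.0 symlink if it is not already in place.
    pem="/usr/share/ca-certificates/minikubeCA.pem"
    h="$(openssl x509 -hash -noout -in "${pem}")"   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"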
	I1024 19:45:56.060166 1145561 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:45:56.064544 1145561 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 19:45:56.064606 1145561 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-989906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-989906 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:45:56.064681 1145561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 19:45:56.064740 1145561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 19:45:56.106791 1145561 cri.go:89] found id: ""
	I1024 19:45:56.106863 1145561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:45:56.118347 1145561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:45:56.129409 1145561 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1024 19:45:56.129474 1145561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:45:56.140213 1145561 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:45:56.140256 1145561 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1024 19:45:56.196998 1145561 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1024 19:45:56.197380 1145561 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:45:56.246414 1145561 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1024 19:45:56.246506 1145561 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-aws
	I1024 19:45:56.246554 1145561 kubeadm.go:322] OS: Linux
	I1024 19:45:56.246599 1145561 kubeadm.go:322] CGROUPS_CPU: enabled
	I1024 19:45:56.246650 1145561 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1024 19:45:56.246708 1145561 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1024 19:45:56.246758 1145561 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1024 19:45:56.246806 1145561 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1024 19:45:56.246855 1145561 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1024 19:45:56.340653 1145561 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:45:56.340837 1145561 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:45:56.340980 1145561 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1024 19:45:56.581821 1145561 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:45:56.583447 1145561 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:45:56.583522 1145561 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:45:56.686213 1145561 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:45:56.691516 1145561 out.go:204]   - Generating certificates and keys ...
	I1024 19:45:56.691608 1145561 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:45:56.691677 1145561 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:45:57.073771 1145561 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 19:45:57.325473 1145561 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 19:45:57.895675 1145561 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 19:45:58.203876 1145561 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 19:45:58.836869 1145561 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 19:45:58.837278 1145561 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-989906 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:45:59.839841 1145561 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 19:45:59.840037 1145561 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-989906 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1024 19:46:00.500172 1145561 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 19:46:00.632610 1145561 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 19:46:01.189864 1145561 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 19:46:01.190405 1145561 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:46:01.513004 1145561 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:46:03.139968 1145561 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:46:03.322051 1145561 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:46:03.508173 1145561 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:46:03.508810 1145561 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:46:03.511088 1145561 out.go:204]   - Booting up control plane ...
	I1024 19:46:03.511210 1145561 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:46:03.519336 1145561 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:46:03.521266 1145561 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:46:03.522559 1145561 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:46:03.525451 1145561 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:46:16.027962 1145561 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502449 seconds
	I1024 19:46:16.028077 1145561 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:46:16.043547 1145561 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:46:16.564122 1145561 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:46:16.564263 1145561 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-989906 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 19:46:17.073109 1145561 kubeadm.go:322] [bootstrap-token] Using token: 2outf5.n916kh4lzbdmpqo8
	I1024 19:46:17.075832 1145561 out.go:204]   - Configuring RBAC rules ...
	I1024 19:46:17.075950 1145561 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:46:17.079118 1145561 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:46:17.087051 1145561 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:46:17.090030 1145561 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:46:17.092840 1145561 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:46:17.095373 1145561 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:46:17.106816 1145561 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:46:17.415673 1145561 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:46:17.495799 1145561 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:46:17.497729 1145561 kubeadm.go:322] 
	I1024 19:46:17.497816 1145561 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:46:17.497830 1145561 kubeadm.go:322] 
	I1024 19:46:17.497917 1145561 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:46:17.497927 1145561 kubeadm.go:322] 
	I1024 19:46:17.497952 1145561 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:46:17.498012 1145561 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:46:17.498065 1145561 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:46:17.498077 1145561 kubeadm.go:322] 
	I1024 19:46:17.498130 1145561 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:46:17.498203 1145561 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:46:17.498270 1145561 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:46:17.498278 1145561 kubeadm.go:322] 
	I1024 19:46:17.498357 1145561 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:46:17.498433 1145561 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:46:17.498443 1145561 kubeadm.go:322] 
	I1024 19:46:17.498525 1145561 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 2outf5.n916kh4lzbdmpqo8 \
	I1024 19:46:17.498628 1145561 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 \
	I1024 19:46:17.498653 1145561 kubeadm.go:322]     --control-plane 
	I1024 19:46:17.498661 1145561 kubeadm.go:322] 
	I1024 19:46:17.498741 1145561 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:46:17.498752 1145561 kubeadm.go:322] 
	I1024 19:46:17.498834 1145561 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 2outf5.n916kh4lzbdmpqo8 \
	I1024 19:46:17.498942 1145561 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 
	I1024 19:46:17.501874 1145561 kubeadm.go:322] W1024 19:45:56.196096    1223 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1024 19:46:17.502160 1145561 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1024 19:46:17.502379 1145561 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:46:17.502514 1145561 kubeadm.go:322] W1024 19:46:03.519583    1223 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1024 19:46:17.502649 1145561 kubeadm.go:322] W1024 19:46:03.521144    1223 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1024 19:46:17.502661 1145561 cni.go:84] Creating CNI manager for ""
	I1024 19:46:17.502669 1145561 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:46:17.505170 1145561 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 19:46:17.507197 1145561 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 19:46:17.512795 1145561 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1024 19:46:17.512821 1145561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 19:46:17.537117 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
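Because the docker driver is paired with the crio runtime, minikube picks kindnet and applies its manifest with the versioned kubectl it manages itself. The same two steps by hand (paths from the log):

    # Confirm the CNI plugin binaries exist, then apply the manifest that
    # minikube rendered to /var/tmp/minikube/cni.yaml.
    stat /opt/cni/bin/portmap
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig apply -f /var/tmp/minikube/cni.yaml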
	I1024 19:46:18.006525 1145561 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:46:18.006680 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:18.006779 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=ingress-addon-legacy-989906 minikube.k8s.io/updated_at=2023_10_24T19_46_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:18.027367 1145561 ops.go:34] apiserver oom_adj: -16
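ops.go reads the apiserver's OOM adjustment to confirm the kubelet launched it with protection from the kernel OOM killer (-16 here; lower values are less likely to be killed under memory pressure). The check is just a /proc read:

    # Find the kube-apiserver PID and read its (legacy) oom_adj value.
    cat "/proc/$(pgrep kube-apiserver)/oom_adj"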
	I1024 19:46:18.162188 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:18.253679 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:18.845656 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:19.345358 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:19.845591 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:20.345891 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:20.845163 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:21.345991 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:21.845959 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:22.345252 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:22.845200 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:23.345998 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:23.845423 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:24.345637 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:24.845261 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:25.345762 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:25.845500 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:26.346161 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:26.845517 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:27.345548 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:27.845447 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:28.345956 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:28.845142 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:29.345169 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:29.846117 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:30.346027 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:30.845581 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:31.345307 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:31.845160 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:32.345407 1145561 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:46:32.500882 1145561 kubeadm.go:1081] duration metric: took 14.494248429s to wait for elevateKubeSystemPrivileges.
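The thirty near-identical "get sa default" lines above are a poll loop, not noise: kubeadm creates the default ServiceAccount asynchronously after init, so minikube retries roughly every 500ms until it exists (14.49s here) before it can bind kube-system to cluster-admin. As a standalone loop:

    # Wait for the "default" ServiceAccount that kubeadm creates after init.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done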
	I1024 19:46:32.500914 1145561 kubeadm.go:406] StartCluster complete in 36.436320737s
	I1024 19:46:32.500932 1145561 settings.go:142] acquiring lock: {Name:mkaa82b52e1ee562b451304e36332812fcccf981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:46:32.500998 1145561 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:46:32.501765 1145561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/kubeconfig: {Name:mkcb958baf0d06a87d3e11266d914b0c86b46ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:46:32.502502 1145561 kapi.go:59] client config for ingress-addon-legacy-989906: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:46:32.502831 1145561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:46:32.503090 1145561 config.go:182] Loaded profile config "ingress-addon-legacy-989906": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1024 19:46:32.503256 1145561 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:46:32.503329 1145561 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-989906"
	I1024 19:46:32.503346 1145561 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-989906"
	I1024 19:46:32.503404 1145561 host.go:66] Checking if "ingress-addon-legacy-989906" exists ...
	I1024 19:46:32.503868 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:46:32.504308 1145561 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-989906"
	I1024 19:46:32.504329 1145561 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-989906"
	I1024 19:46:32.504585 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:46:32.505609 1145561 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 19:46:32.546692 1145561 kapi.go:59] client config for ingress-addon-legacy-989906: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:46:32.546976 1145561 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-989906"
	I1024 19:46:32.547005 1145561 host.go:66] Checking if "ingress-addon-legacy-989906" exists ...
	I1024 19:46:32.547456 1145561 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-989906 --format={{.State.Status}}
	I1024 19:46:32.559549 1145561 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:46:32.567381 1145561 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:46:32.567408 1145561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:46:32.567471 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:46:32.577236 1145561 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-989906" context rescaled to 1 replicas
	I1024 19:46:32.577275 1145561 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 19:46:32.579112 1145561 out.go:177] * Verifying Kubernetes components...
	I1024 19:46:32.581686 1145561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:46:32.596317 1145561 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:46:32.596337 1145561 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:46:32.596398 1145561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-989906
	I1024 19:46:32.611116 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:46:32.643420 1145561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34225 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/ingress-addon-legacy-989906/id_rsa Username:docker}
	I1024 19:46:32.765053 1145561 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
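The sed pipeline above inserts a hosts{} stanza (resolving host.minikube.internal to the gateway IP) ahead of the Corefile's forward plugin. A sketch of the equivalent edit done through client-go instead of kubectl (the insertion logic is simplified; indentation strings mirror the sed expression above):

package addons

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord fetches the coredns ConfigMap, inserts the hosts{}
// stanza before the forward plugin line, and writes the ConfigMap back.
func injectHostRecord(cs kubernetes.Interface) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	stanza := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf",
		stanza+"        forward . /etc/resolv.conf", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}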
	I1024 19:46:32.765792 1145561 kapi.go:59] client config for ingress-addon-legacy-989906: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 19:46:32.766075 1145561 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-989906" to be "Ready" ...
	I1024 19:46:32.770985 1145561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:46:32.857891 1145561 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:46:33.255390 1145561 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1024 19:46:33.345619 1145561 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1024 19:46:33.347744 1145561 addons.go:502] enable addons completed in 844.478116ms: enabled=[storage-provisioner default-storageclass]
	I1024 19:46:34.996297 1145561 node_ready.go:58] node "ingress-addon-legacy-989906" has status "Ready":"False"
	I1024 19:46:37.495645 1145561 node_ready.go:58] node "ingress-addon-legacy-989906" has status "Ready":"False"
	I1024 19:46:39.995467 1145561 node_ready.go:58] node "ingress-addon-legacy-989906" has status "Ready":"False"
	I1024 19:46:40.995252 1145561 node_ready.go:49] node "ingress-addon-legacy-989906" has status "Ready":"True"
	I1024 19:46:40.995281 1145561 node_ready.go:38] duration metric: took 8.229186421s waiting for node "ingress-addon-legacy-989906" to be "Ready" ...
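The node_ready.go wait above is a poll on the node's Ready condition. A minimal client-go sketch of that loop (the 6m timeout mirrors the log; the 2s interval and everything else is illustrative):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the node reports a Ready=True condition.
func waitNodeReady(cs kubernetes.Interface, name string) error {
	return wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}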
	I1024 19:46:40.995292 1145561 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:46:41.002885 1145561 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-s684d" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:43.012046 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-24 19:46:32 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1024 19:46:45.017718 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"False"
	I1024 19:46:47.514912 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"False"
	I1024 19:46:50.014512 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"False"
	I1024 19:46:52.514688 1145561 pod_ready.go:102] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"False"
	I1024 19:46:55.015069 1145561 pod_ready.go:92] pod "coredns-66bff467f8-s684d" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.015096 1145561 pod_ready.go:81] duration metric: took 14.012103217s waiting for pod "coredns-66bff467f8-s684d" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.015108 1145561 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.020395 1145561 pod_ready.go:92] pod "etcd-ingress-addon-legacy-989906" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.020425 1145561 pod_ready.go:81] duration metric: took 5.308414ms waiting for pod "etcd-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.020441 1145561 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.026289 1145561 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-989906" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.026316 1145561 pod_ready.go:81] duration metric: took 5.866508ms waiting for pod "kube-apiserver-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.026330 1145561 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.031915 1145561 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-989906" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.031942 1145561 pod_ready.go:81] duration metric: took 5.603289ms waiting for pod "kube-controller-manager-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.031956 1145561 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tcvng" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.036764 1145561 pod_ready.go:92] pod "kube-proxy-tcvng" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.036793 1145561 pod_ready.go:81] duration metric: took 4.811046ms waiting for pod "kube-proxy-tcvng" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.036806 1145561 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.210165 1145561 request.go:629] Waited for 173.266799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-989906
	I1024 19:46:55.410361 1145561 request.go:629] Waited for 197.379868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-989906
	I1024 19:46:55.413072 1145561 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-989906" in "kube-system" namespace has status "Ready":"True"
	I1024 19:46:55.413097 1145561 pod_ready.go:81] duration metric: took 376.28353ms waiting for pod "kube-scheduler-ingress-addon-legacy-989906" in "kube-system" namespace to be "Ready" ...
	I1024 19:46:55.413110 1145561 pod_ready.go:38] duration metric: took 14.417800427s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
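The "Waited ... due to client-side throttling" lines above come from client-go's default per-client rate limiter (5 QPS with a burst of 10). A sketch of how a caller raises those limits when the throttling delays matter (the values below are illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	cfg := &rest.Config{Host: "https://192.168.49.2:8443"}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default burst is 10
	// Equivalently, install an explicit token-bucket limiter:
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
	fmt.Println("rate limiter installed:", cfg.RateLimiter != nil)
}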
	I1024 19:46:55.413130 1145561 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:46:55.413187 1145561 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:46:55.426479 1145561 api_server.go:72] duration metric: took 22.849159014s to wait for apiserver process to appear ...
	I1024 19:46:55.426504 1145561 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:46:55.426520 1145561 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1024 19:46:55.435427 1145561 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1024 19:46:55.436267 1145561 api_server.go:141] control plane version: v1.18.20
	I1024 19:46:55.436293 1145561 api_server.go:131] duration metric: took 9.781164ms to wait for apiserver health ...
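The healthz probe above is a plain HTTPS GET that expects a 200 with body "ok". A minimal sketch of the same check (TLS verification is skipped here for brevity, which a real client should not do):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expects: 200 ok
}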
	I1024 19:46:55.436302 1145561 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:46:55.609674 1145561 request.go:629] Waited for 173.287402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:46:55.615527 1145561 system_pods.go:59] 8 kube-system pods found
	I1024 19:46:55.615565 1145561 system_pods.go:61] "coredns-66bff467f8-s684d" [15285ee9-beda-4c26-b142-d521a8fd9693] Running
	I1024 19:46:55.615573 1145561 system_pods.go:61] "etcd-ingress-addon-legacy-989906" [338ae618-49b2-4df5-9fab-0fb48ef3a8cb] Running
	I1024 19:46:55.615606 1145561 system_pods.go:61] "kindnet-qsxdg" [1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d] Running
	I1024 19:46:55.615612 1145561 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-989906" [39c44a4a-7457-4c53-b822-9dfe663a2803] Running
	I1024 19:46:55.615617 1145561 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-989906" [f661d095-a92d-4613-afb6-7f92111b468e] Running
	I1024 19:46:55.615626 1145561 system_pods.go:61] "kube-proxy-tcvng" [e1f70384-ced8-4a81-89d8-e4d8dc5519b6] Running
	I1024 19:46:55.615631 1145561 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-989906" [5354673b-0e1e-45a0-addc-d7d966d03605] Running
	I1024 19:46:55.615635 1145561 system_pods.go:61] "storage-provisioner" [484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4] Running
	I1024 19:46:55.615643 1145561 system_pods.go:74] duration metric: took 179.334561ms to wait for pod list to return data ...
	I1024 19:46:55.615654 1145561 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:46:55.810050 1145561 request.go:629] Waited for 194.319829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1024 19:46:55.812303 1145561 default_sa.go:45] found service account: "default"
	I1024 19:46:55.812331 1145561 default_sa.go:55] duration metric: took 196.670644ms for default service account to be created ...
	I1024 19:46:55.812343 1145561 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:46:56.009796 1145561 request.go:629] Waited for 197.345669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1024 19:46:56.016158 1145561 system_pods.go:86] 8 kube-system pods found
	I1024 19:46:56.016194 1145561 system_pods.go:89] "coredns-66bff467f8-s684d" [15285ee9-beda-4c26-b142-d521a8fd9693] Running
	I1024 19:46:56.016201 1145561 system_pods.go:89] "etcd-ingress-addon-legacy-989906" [338ae618-49b2-4df5-9fab-0fb48ef3a8cb] Running
	I1024 19:46:56.016206 1145561 system_pods.go:89] "kindnet-qsxdg" [1a50c0d6-271a-4e41-b2d1-fd3f68c12d0d] Running
	I1024 19:46:56.016212 1145561 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-989906" [39c44a4a-7457-4c53-b822-9dfe663a2803] Running
	I1024 19:46:56.016238 1145561 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-989906" [f661d095-a92d-4613-afb6-7f92111b468e] Running
	I1024 19:46:56.016253 1145561 system_pods.go:89] "kube-proxy-tcvng" [e1f70384-ced8-4a81-89d8-e4d8dc5519b6] Running
	I1024 19:46:56.016259 1145561 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-989906" [5354673b-0e1e-45a0-addc-d7d966d03605] Running
	I1024 19:46:56.016264 1145561 system_pods.go:89] "storage-provisioner" [484e73f7-9ee7-42a0-b5fd-7b38d85eb8b4] Running
	I1024 19:46:56.016276 1145561 system_pods.go:126] duration metric: took 203.926635ms to wait for k8s-apps to be running ...
	I1024 19:46:56.016286 1145561 system_svc.go:44] waiting for kubelet service to be running ...
	I1024 19:46:56.016367 1145561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:46:56.030260 1145561 system_svc.go:56] duration metric: took 13.9616ms WaitForService to wait for kubelet.
	I1024 19:46:56.030288 1145561 kubeadm.go:581] duration metric: took 23.452986308s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:46:56.030309 1145561 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:46:56.209624 1145561 request.go:629] Waited for 179.214283ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1024 19:46:56.212427 1145561 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 19:46:56.212456 1145561 node_conditions.go:123] node cpu capacity is 2
	I1024 19:46:56.212467 1145561 node_conditions.go:105] duration metric: took 182.13308ms to run NodePressure ...
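The NodePressure verification above boils down to listing nodes and reading their capacity. A client-go sketch of the same read (clientset construction omitted):

package diag

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// logNodeCapacity prints the cpu and ephemeral-storage capacity the
// node_conditions.go lines above report.
func logNodeCapacity(cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
	}
	return nil
}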
	I1024 19:46:56.212494 1145561 start.go:228] waiting for startup goroutines ...
	I1024 19:46:56.212505 1145561 start.go:233] waiting for cluster config update ...
	I1024 19:46:56.212515 1145561 start.go:242] writing updated cluster config ...
	I1024 19:46:56.212806 1145561 ssh_runner.go:195] Run: rm -f paused
	I1024 19:46:56.272558 1145561 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1024 19:46:56.275638 1145561 out.go:177] 
	W1024 19:46:56.278098 1145561 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1024 19:46:56.280286 1145561 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1024 19:46:56.282985 1145561 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-989906" cluster and "default" namespace by default
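The "minor skew: 10" warning above compares the kubectl client's minor version with the cluster's. A sketch of that computation (real minikube parses full semver; this assumes well-formed "X.Y.Z" strings):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.28.3", "1.18.20"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
	if skew > 1 {
		fmt.Println("! kubectl may have incompatibilities with this cluster version")
	}
}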
	
	* 
	* ==> CRI-O <==
	* Oct 24 19:53:11 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:11.793383290Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=b0f0c368-9bf8-4efd-82af-752b3c4f39ff name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:14 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:14.793050622Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=b67bcdd4-c2df-4c83-a9a5-6ef5b91e5176 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:22 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:22.793234618Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=bf90d2fb-1e10-441b-b354-33c8e155bf2c name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:22 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:22.793502686Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=bf90d2fb-1e10-441b-b354-33c8e155bf2c name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:22 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:22.794097301Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=6d5ce0ef-767c-48e6-8ef7-7f8e8985f0cf name=/runtime.v1alpha2.ImageService/PullImage
	Oct 24 19:53:22 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:22.799187845Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:53:29 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:29.793286155Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=386f5f15-84ea-40eb-a94d-180a77feb8f5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:34 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:34.793276622Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=8f5a16b3-467e-4c45-a403-65a2293bcaf9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:34 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:34.793547767Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=8f5a16b3-467e-4c45-a403-65a2293bcaf9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:42 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:42.793206626Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=9ca4641e-6f49-4a04-9d3f-6581fc3320b0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:49 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:49.793105564Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=fa6e346d-a782-4f32-8f0c-16ac596b2ab3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:49 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:49.793378243Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=fa6e346d-a782-4f32-8f0c-16ac596b2ab3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:53:56 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:53:56.793177002Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=dce0ab83-3b1d-47db-b43b-c6d2dcd4b318 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:04 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:04.793120090Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=d59296db-361f-42c1-894d-0ceaa01a1dc6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:04 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:04.793384261Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=d59296db-361f-42c1-894d-0ceaa01a1dc6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:07 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:07.793075739Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=8ead86c1-67c2-4377-9a0a-1e3b4ac599e7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:07 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:07.793360661Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=8ead86c1-67c2-4377-9a0a-1e3b4ac599e7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:10 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:10.793230936Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=7c8aae9a-0969-410f-bb87-6e7475349ae0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:19 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:19.793304663Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=e2667089-05ae-4c0c-a1a7-cf22b3d21ae2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:19 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:19.793642261Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=e2667089-05ae-4c0c-a1a7-cf22b3d21ae2 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:20 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:20.793809124Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=21e352f0-c450-4d8a-aeff-40543ddf0148 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:20 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:20.794070694Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=21e352f0-c450-4d8a-aeff-40543ddf0148 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:25 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:25.793109995Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=8532f6ca-e00f-4ea8-bb3e-8978b6394916 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:31 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:31.793102294Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=325caa5c-66ec-4c60-8fdb-d71f4fac4d36 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 24 19:54:31 ingress-addon-legacy-989906 crio[893]: time="2023-10-24 19:54:31.793383917Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=325caa5c-66ec-4c60-8fdb-d71f4fac4d36 name=/runtime.v1alpha2.ImageService/ImageStatus
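The ImageStatus/PullImage traffic above is the kubelet polling CRI-O over the v1alpha2 CRI API named in each log line. A minimal client doing the same status check (the socket path matches the cri-socket node annotation below; the image reference is one the log shows as missing):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := cri.NewImageServiceClient(conn)
	resp, err := client.ImageStatus(context.TODO(), &cri.ImageStatusRequest{
		Image: &cri.ImageSpec{Image: "docker.io/jettech/kube-webhook-certgen:v1.5.1"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("image present:", resp.Image != nil) // nil Image => "not found"
}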
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	df4fd98981097       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   7 minutes ago       Running             storage-provisioner       0                   7c2a71b976099       storage-provisioner
	a9fb2d6cb9cec       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  7 minutes ago       Running             coredns                   0                   0fa717c044f25       coredns-66bff467f8-s684d
	f9a74f40b715c       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                7 minutes ago       Running             kindnet-cni               0                   87d94f7ba6bde       kindnet-qsxdg
	efa6e1f60f591       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  7 minutes ago       Running             kube-proxy                0                   6c1d10729efe6       kube-proxy-tcvng
	c8cf3612021c7       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  8 minutes ago       Running             kube-scheduler            0                   5d58da357b1e1       kube-scheduler-ingress-addon-legacy-989906
	d5cc6c70a928b       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  8 minutes ago       Running             kube-controller-manager   0                   b0f34ba30add0       kube-controller-manager-ingress-addon-legacy-989906
	a247662fef54b       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  8 minutes ago       Running             kube-apiserver            0                   f179865bc9f7d       kube-apiserver-ingress-addon-legacy-989906
	e97cc2b2bd3b1       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  8 minutes ago       Running             etcd                      0                   4afc01bd40340       etcd-ingress-addon-legacy-989906
	
	* 
	* ==> coredns [a9fb2d6cb9cec205962416251374d92cd9e7503a773f0ca4e5c223b9b6b4baae] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:37469 - 2648 "HINFO IN 7875511511347486053.9022443650232970071. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037457384s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-989906
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-989906
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=ingress-addon-legacy-989906
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_46_18_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:46:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-989906
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 19:54:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:51:51 +0000   Tue, 24 Oct 2023 19:46:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:51:51 +0000   Tue, 24 Oct 2023 19:46:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:51:51 +0000   Tue, 24 Oct 2023 19:46:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:51:51 +0000   Tue, 24 Oct 2023 19:46:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-989906
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 41fcfbfa5a5e42e9a423869674535624
	  System UUID:                e7c0f77c-5c5a-4fee-892c-6f6289d58eb2
	  Boot ID:                    f05db690-1143-478b-8d18-db062f271a9b
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-twz9h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-admission-patch-wt5cm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-zvwf7              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m35s
	  kube-system                 coredns-66bff467f8-s684d                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m
	  kube-system                 etcd-ingress-addon-legacy-989906                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kindnet-qsxdg                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m59s
	  kube-system                 kube-apiserver-ingress-addon-legacy-989906             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-989906    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-ingress-dns-minikube                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-tcvng                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-scheduler-ingress-addon-legacy-989906             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m26s (x5 over 8m26s)  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m26s (x5 over 8m26s)  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s (x5 over 8m26s)  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m12s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m12s                  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m12s                  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m12s                  kubelet     Node ingress-addon-legacy-989906 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m59s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m52s                  kubelet     Node ingress-addon-legacy-989906 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001163] FS-Cache: O-key=[8] '3a643b0000000000'
	[  +0.000725] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001044] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=0000000080c1e564
	[  +0.001072] FS-Cache: N-key=[8] '3a643b0000000000'
	[  +0.003112] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=000000003058710d
	[  +0.001176] FS-Cache: O-key=[8] '3a643b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=0000000010398763
	[  +0.001113] FS-Cache: N-key=[8] '3a643b0000000000'
	[  +3.176984] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000975] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000953f0312
	[  +0.001131] FS-Cache: O-key=[8] '39643b0000000000'
	[  +0.000732] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000c4f274aa
	[  +0.001081] FS-Cache: N-key=[8] '39643b0000000000'
	[  +0.310132] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000a06fabf2
	[  +0.001138] FS-Cache: O-key=[8] '3f643b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000996] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=000000004c0f819e
	[  +0.001053] FS-Cache: N-key=[8] '3f643b0000000000'
	
	* 
	* ==> etcd [e97cc2b2bd3b113a0cfd0a070341603a6347a43b870add9a7da2c111fda4270c] <==
	* raft2023/10/24 19:46:08 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/24 19:46:08 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/24 19:46:08 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/24 19:46:08 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-24 19:46:09.214165 W | auth: simple token is not cryptographically signed
	2023-10-24 19:46:09.305775 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-24 19:46:09.363811 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-24 19:46:09.394268 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-24 19:46:09.543176 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-24 19:46:09.558003 I | embed: listening for peers on 192.168.49.2:2380
	2023-10-24 19:46:09.582091 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/24 19:46:09 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/24 19:46:09 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-24 19:46:09.653879 I | etcdserver: published {Name:ingress-addon-legacy-989906 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-24 19:46:09.666196 I | embed: ready to serve client requests
	2023-10-24 19:46:09.666296 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-24 19:46:09.691960 I | embed: ready to serve client requests
	2023-10-24 19:46:09.693305 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-24 19:46:09.693429 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-24 19:46:09.693489 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-24 19:46:10.262284 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  19:54:32 up  9:37,  0 users,  load average: 0.13, 0.31, 0.70
	Linux ingress-addon-legacy-989906 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [f9a74f40b715c08a3af5ccec7f1a355bb4413eaf9f8d665231d959337a8c2093] <==
	* I1024 19:52:26.463183       1 main.go:227] handling current node
	I1024 19:52:36.466333       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:36.466367       1 main.go:227] handling current node
	I1024 19:52:46.469818       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:46.469846       1 main.go:227] handling current node
	I1024 19:52:56.476975       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:52:56.477003       1 main.go:227] handling current node
	I1024 19:53:06.488814       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:53:06.488846       1 main.go:227] handling current node
	I1024 19:53:16.495975       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:53:16.496006       1 main.go:227] handling current node
	I1024 19:53:26.499021       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:53:26.499048       1 main.go:227] handling current node
	I1024 19:53:36.502658       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:53:36.502690       1 main.go:227] handling current node
	I1024 19:53:46.506636       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:53:46.506665       1 main.go:227] handling current node
	I1024 19:53:56.518143       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:53:56.518172       1 main.go:227] handling current node
	I1024 19:54:06.523322       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:54:06.523353       1 main.go:227] handling current node
	I1024 19:54:16.527222       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:54:16.527252       1 main.go:227] handling current node
	I1024 19:54:26.531630       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1024 19:54:26.531662       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [a247662fef54b5d20bc798cd13a283fbf75f727c692686b1a65ad9a06104b756] <==
	* I1024 19:46:14.196764       1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
	I1024 19:46:14.196779       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	E1024 19:46:14.251145       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1024 19:46:14.261195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1024 19:46:14.261293       1 cache.go:39] Caches are synced for autoregister controller
	I1024 19:46:14.261607       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1024 19:46:14.261683       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1024 19:46:14.265932       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 19:46:15.070712       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1024 19:46:15.070742       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1024 19:46:15.091510       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1024 19:46:15.096417       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1024 19:46:15.096445       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1024 19:46:15.542121       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:46:15.590696       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1024 19:46:15.710627       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1024 19:46:15.711772       1 controller.go:609] quota admission added evaluator for: endpoints
	I1024 19:46:15.715202       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1024 19:46:16.512102       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1024 19:46:17.377027       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1024 19:46:17.484288       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1024 19:46:20.759281       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:46:32.696926       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1024 19:46:32.916294       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1024 19:46:57.240634       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [d5cc6c70a928b9ec522e10e51ad4dda1729336c1e6a9cee7a3bfa93eb55906d9] <==
	* I1024 19:46:32.686669       1 shared_informer.go:230] Caches are synced for GC 
	I1024 19:46:32.701240       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1024 19:46:32.716036       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"70777e8d-c2c0-440e-8b79-0e7a347c3cae", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I1024 19:46:32.716190       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1024 19:46:32.721861       1 shared_informer.go:230] Caches are synced for PVC protection 
	I1024 19:46:32.735018       1 shared_informer.go:230] Caches are synced for expand 
	I1024 19:46:32.735938       1 shared_informer.go:230] Caches are synced for attach detach 
	I1024 19:46:32.758924       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"84bb21cf-905f-4df0-86c7-c55e7e5e2f57", APIVersion:"apps/v1", ResourceVersion:"334", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-s684d
	I1024 19:46:32.907819       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1024 19:46:32.949984       1 shared_informer.go:230] Caches are synced for stateful set 
	I1024 19:46:32.986162       1 shared_informer.go:230] Caches are synced for resource quota 
	I1024 19:46:32.989378       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1024 19:46:32.989485       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1024 19:46:33.035017       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1024 19:46:33.135050       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"7cc392f0-9361-4969-bc08-8573abe57d85", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-tcvng
	I1024 19:46:33.153247       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"eeb1d1c4-136d-48b7-b3b2-e207f619da49", APIVersion:"apps/v1", ResourceVersion:"211", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-qsxdg
	E1024 19:46:33.262530       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"eeb1d1c4-136d-48b7-b3b2-e207f619da49", ResourceVersion:"211", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63833773577, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40017d93e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40017d9400)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40017d9420), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017d9440), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017d9460), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40017d9480), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017d94a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017d94e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40017ce8c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40017e4bd8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000596cb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40005c6cb0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40017e4c20)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1024 19:46:33.339802       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1024 19:46:33.339845       1 shared_informer.go:230] Caches are synced for resource quota 
	I1024 19:46:42.652232       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1024 19:46:57.228967       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"62537c9b-dc7b-49f6-8e82-3eb3eba1caee", APIVersion:"apps/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1024 19:46:57.245226       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"af93d955-2d5b-44ae-977a-3853f317263f", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-zvwf7
	I1024 19:46:57.286170       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"879063ad-d05c-4ead-a561-3e71067b211e", APIVersion:"batch/v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-twz9h
	I1024 19:46:57.336503       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"402c236a-0099-413d-b33c-f34ba3ac1468", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-wt5cm
	
	* 
	* ==> kube-proxy [efa6e1f60f591b7123f48b59f9a6a8ab192fa3e090606094a68d65f5f7fab865] <==
	* W1024 19:46:33.720766       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1024 19:46:33.732515       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1024 19:46:33.732641       1 server_others.go:186] Using iptables Proxier.
	I1024 19:46:33.733028       1 server.go:583] Version: v1.18.20
	I1024 19:46:33.734711       1 config.go:315] Starting service config controller
	I1024 19:46:33.734799       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1024 19:46:33.734877       1 config.go:133] Starting endpoints config controller
	I1024 19:46:33.734931       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1024 19:46:33.835325       1 shared_informer.go:230] Caches are synced for service config 
	I1024 19:46:33.837171       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [c8cf3612021c7fef779b711b930241776408303c48fb9e0d242b5b964a19c69c] <==
	* I1024 19:46:14.270058       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1024 19:46:14.270150       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1024 19:46:14.272637       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1024 19:46:14.274064       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:46:14.274084       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1024 19:46:14.274103       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1024 19:46:14.279644       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:46:14.279819       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:46:14.279928       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:46:14.280029       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:46:14.280132       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:46:14.280229       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:46:14.280343       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:46:14.280443       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:46:14.282185       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:46:14.282342       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:46:14.282530       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:46:14.282696       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:46:15.137942       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:46:15.166096       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:46:15.177169       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:46:15.278302       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 19:46:15.298843       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1024 19:46:18.274194       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1024 19:46:33.010923       1 factory.go:503] pod: kube-system/coredns-66bff467f8-s684d is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Oct 24 19:53:42 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:42.793541    1622 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:53:42 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:42.793581    1622 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:53:42 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:42.793610    1622 pod_workers.go:191] Error syncing pod 6e98eba2-8de5-4623-8f15-a76730a71f02 ("kube-ingress-dns-minikube_kube-system(6e98eba2-8de5-4623-8f15-a76730a71f02)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 24 19:53:49 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:49.793927    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:53:53 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:53.072827    1622 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:53:53 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:53.072905    1622 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:53:53 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:53.072963    1622 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Oct 24 19:53:53 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:53.072995    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Oct 24 19:53:56 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:56.793543    1622 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:53:56 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:56.793579    1622 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:53:56 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:56.793616    1622 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:53:56 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:53:56.793646    1622 pod_workers.go:191] Error syncing pod 6e98eba2-8de5-4623-8f15-a76730a71f02 ("kube-ingress-dns-minikube_kube-system(6e98eba2-8de5-4623-8f15-a76730a71f02)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 24 19:54:04 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:04.794069    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:54:07 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:07.793583    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:54:10 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:10.793806    1622 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:54:10 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:10.793850    1622 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:54:10 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:10.793894    1622 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:54:10 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:10.793924    1622 pod_workers.go:191] Error syncing pod 6e98eba2-8de5-4623-8f15-a76730a71f02 ("kube-ingress-dns-minikube_kube-system(6e98eba2-8de5-4623-8f15-a76730a71f02)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 24 19:54:19 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:19.793930    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:54:20 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:20.794586    1622 pod_workers.go:191] Error syncing pod 8e98f46f-59e2-4568-94f8-5fc5e8871dfb ("ingress-nginx-admission-patch-wt5cm_ingress-nginx(8e98f46f-59e2-4568-94f8-5fc5e8871dfb)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Oct 24 19:54:25 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:25.793411    1622 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:54:25 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:25.793459    1622 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:54:25 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:25.793505    1622 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 24 19:54:25 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:25.793539    1622 pod_workers.go:191] Error syncing pod 6e98eba2-8de5-4623-8f15-a76730a71f02 ("kube-ingress-dns-minikube_kube-system(6e98eba2-8de5-4623-8f15-a76730a71f02)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 24 19:54:31 ingress-addon-legacy-989906 kubelet[1622]: E1024 19:54:31.793768    1622 pod_workers.go:191] Error syncing pod de4a5d2b-3f85-4133-a21d-4a7c93120d83 ("ingress-nginx-admission-create-twz9h_ingress-nginx(de4a5d2b-3f85-4133-a21d-4a7c93120d83)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [df4fd989810972fead0fe8c58d47837f7988fc6412c45fd14a00c36baf2249b3] <==
	* I1024 19:46:48.077897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:46:48.092104       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:46:48.092180       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:46:48.098967       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:46:48.099153       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-989906_1cd0a73d-5d54-4145-a2f5-3a5e6f750825!
	I1024 19:46:48.100140       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c0998217-79cc-42eb-a003-eb345b5b1881", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-989906_1cd0a73d-5d54-4145-a2f5-3a5e6f750825 became leader
	I1024 19:46:48.200117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-989906_1cd0a73d-5d54-4145-a2f5-3a5e6f750825!
	

-- /stdout --
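The kubelet log above shows two separate image problems behind this failure: the kube-ingress-dns-minikube pod cannot start because the short image name cryptexlabs/minikube-ingress-dns resolves to no registry (no unqualified-search registries are defined in /etc/containers/registries.conf on the node), while the two admission jobs are stuck behind Docker Hub's anonymous pull rate limit. A minimal node-side sketch of a workaround for the short-name error, assuming the kicbase node's CRI-O reads drop-in files from /etc/containers/registries.conf.d (the drop-in file name here is illustrative, not part of the test):

    minikube -p ingress-addon-legacy-989906 ssh -- sudo sh -c \
      'echo "unqualified-search-registries = [\"docker.io\"]" \
         > /etc/containers/registries.conf.d/99-short-names.conf \
       && systemctl restart crio'

The cleaner fix is to fully qualify the reference in the addon manifest (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0, keeping the pinned digest) so that no search list is consulted at all.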
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-989906 -n ingress-addon-legacy-989906
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-989906 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-twz9h ingress-nginx-admission-patch-wt5cm ingress-nginx-controller-7fcf777cb7-zvwf7 kube-ingress-dns-minikube
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-989906 describe pod ingress-nginx-admission-create-twz9h ingress-nginx-admission-patch-wt5cm ingress-nginx-controller-7fcf777cb7-zvwf7 kube-ingress-dns-minikube
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-989906 describe pod ingress-nginx-admission-create-twz9h ingress-nginx-admission-patch-wt5cm ingress-nginx-controller-7fcf777cb7-zvwf7 kube-ingress-dns-minikube: exit status 1 (86.35302ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-twz9h" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wt5cm" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-zvwf7" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found

** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-989906 describe pod ingress-nginx-admission-create-twz9h ingress-nginx-admission-patch-wt5cm ingress-nginx-controller-7fcf777cb7-zvwf7 kube-ingress-dns-minikube: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (92.53s)
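The toomanyrequests errors in the same kubelet log explain why the admission-create and admission-patch pods never ran: anonymous pulls of docker.io/jettech/kube-webhook-certgen from this runner exhausted Docker Hub's rate limit. A hedged mitigation sketch, assuming Docker Hub credentials are available on the host; note that the manifest pins the image by digest, so the side-loaded copy must carry that exact digest for the kubelet to use it:

    docker login        # authenticated clients get a higher pull quota
    docker pull docker.io/jettech/kube-webhook-certgen:v1.5.1
    minikube -p ingress-addon-legacy-989906 image load \
      docker.io/jettech/kube-webhook-certgen:v1.5.1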

TestMultiNode/serial/PingHostFrom2Pods (4.5s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-c622k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-c622k -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-c622k -- sh -c "ping -c 1 192.168.58.1": exit status 1 (247.974043ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-c622k): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-wldjb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-wldjb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-wldjb -- sh -c "ping -c 1 192.168.58.1": exit status 1 (250.502811ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-wldjb): exit status 1
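In both pods the nslookup of host.minikube.internal succeeded; only the ICMP send failed. busybox's ping opens a raw socket, which needs CAP_NET_RAW, a capability the unprivileged busybox container does not have, hence "ping: permission denied (are you root?)". Two sketches of how this is commonly unblocked, neither of which the test performs; the sysctl route only helps if the busybox build falls back to datagram ICMP sockets:

    # allow unprivileged (datagram) ICMP for all gids on the node
    minikube -p multinode-773966 ssh -- sudo sysctl -w "net.ipv4.ping_group_range=0 2147483647"
    # or grant the capability in the busybox container spec:
    #   securityContext:
    #     capabilities:
    #       add: ["NET_RAW"]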
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-773966
helpers_test.go:235: (dbg) docker inspect multinode-773966:

-- stdout --
	[
	    {
	        "Id": "94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30",
	        "Created": "2023-10-24T20:01:06.618638711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1181507,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T20:01:06.944335498Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5b0caed01db498fc255865f87f2d678d2b2e04ba0f7d056894d23da26cbc249a",
	        "ResolvConfPath": "/var/lib/docker/containers/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30/hostname",
	        "HostsPath": "/var/lib/docker/containers/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30/hosts",
	        "LogPath": "/var/lib/docker/containers/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30-json.log",
	        "Name": "/multinode-773966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-773966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-773966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c0659c828d6b86fd3108c6f8daba03200400c123189b65ce493ce7803558770-init/diff:/var/lib/docker/overlay2/ab7e622cf253e7484ae8d7af3c5bb3ba83f211c878ee7a8c069db30bbba78b6c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c0659c828d6b86fd3108c6f8daba03200400c123189b65ce493ce7803558770/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c0659c828d6b86fd3108c6f8daba03200400c123189b65ce493ce7803558770/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c0659c828d6b86fd3108c6f8daba03200400c123189b65ce493ce7803558770/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-773966",
	                "Source": "/var/lib/docker/volumes/multinode-773966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-773966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-773966",
	                "name.minikube.sigs.k8s.io": "multinode-773966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb1bb38b01f534339bfd136b21c3f8443cd2c7ead9efc64c51812dae9bdf2452",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34285"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34284"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34281"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34283"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34282"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cb1bb38b01f5",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-773966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "94e7e8f6e06d",
	                        "multinode-773966"
	                    ],
	                    "NetworkID": "52df26ec37c4a18db50384b4a16d69599a4f72352492f3179f7d2c7c04aa4f66",
	                    "EndpointID": "fa6970f1d80a6c192bffe7b482829aa75086b43ce38874d89ea515ee62a16057",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
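The inspect output confirms that 192.168.58.1, the address both pods were asked to ping, is the gateway of the multinode-773966 Docker network, i.e. the host side of the bridge. The same value can be read straight from the inspect data with a Go template (network name taken from the dump above):

    docker inspect -f '{{ (index .NetworkSettings.Networks "multinode-773966").Gateway }}' multinode-773966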
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-773966 -n multinode-773966
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-773966 logs -n 25: (1.616981911s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-035191                           | mount-start-2-035191 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-035191 ssh -- ls                    | mount-start-2-035191 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-028799                           | mount-start-1-028799 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-035191 ssh -- ls                    | mount-start-2-035191 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-035191                           | mount-start-2-035191 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	| start   | -p mount-start-2-035191                           | mount-start-2-035191 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	| ssh     | mount-start-2-035191 ssh -- ls                    | mount-start-2-035191 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:00 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-035191                           | mount-start-2-035191 | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC | 24 Oct 23 20:01 UTC |
	| delete  | -p mount-start-1-028799                           | mount-start-1-028799 | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:01 UTC |
	| start   | -p multinode-773966                               | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:01 UTC | 24 Oct 23 20:03 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- apply -f                   | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- rollout                    | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- get pods -o                | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- get pods -o                | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | busybox-5bc68d56bd-c622k --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | busybox-5bc68d56bd-wldjb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | busybox-5bc68d56bd-c622k --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | busybox-5bc68d56bd-wldjb --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | busybox-5bc68d56bd-c622k -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | busybox-5bc68d56bd-wldjb -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- get pods -o                | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | busybox-5bc68d56bd-c622k                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | busybox-5bc68d56bd-c622k -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC | 24 Oct 23 20:03 UTC |
	|         | busybox-5bc68d56bd-wldjb                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-773966 -- exec                       | multinode-773966     | jenkins | v1.31.2 | 24 Oct 23 20:03 UTC |                     |
	|         | busybox-5bc68d56bd-wldjb -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 20:01:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 20:01:01.170375 1181050 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:01:01.170613 1181050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:01:01.170645 1181050 out.go:309] Setting ErrFile to fd 2...
	I1024 20:01:01.170668 1181050 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:01:01.171006 1181050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 20:01:01.171609 1181050 out.go:303] Setting JSON to false
	I1024 20:01:01.172808 1181050 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35011,"bootTime":1698142651,"procs":379,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 20:01:01.172931 1181050 start.go:138] virtualization:  
	I1024 20:01:01.175730 1181050 out.go:177] * [multinode-773966] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 20:01:01.178578 1181050 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:01:01.178850 1181050 notify.go:220] Checking for updates...
	I1024 20:01:01.183445 1181050 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:01:01.185714 1181050 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:01:01.187593 1181050 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 20:01:01.189691 1181050 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 20:01:01.191757 1181050 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:01:01.193997 1181050 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:01:01.220948 1181050 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 20:01:01.221075 1181050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:01:01.309093 1181050 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-24 20:01:01.298654353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:01:01.309205 1181050 docker.go:295] overlay module found
	I1024 20:01:01.312054 1181050 out.go:177] * Using the docker driver based on user configuration
	I1024 20:01:01.314795 1181050 start.go:298] selected driver: docker
	I1024 20:01:01.314814 1181050 start.go:902] validating driver "docker" against <nil>
	I1024 20:01:01.314830 1181050 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:01:01.315498 1181050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:01:01.382037 1181050 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-24 20:01:01.372500161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:01:01.382206 1181050 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 20:01:01.382444 1181050 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 20:01:01.384689 1181050 out.go:177] * Using Docker driver with root privileges
	I1024 20:01:01.386855 1181050 cni.go:84] Creating CNI manager for ""
	I1024 20:01:01.386879 1181050 cni.go:136] 0 nodes found, recommending kindnet
	I1024 20:01:01.386890 1181050 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 20:01:01.386907 1181050 start_flags.go:323] config:
	{Name:multinode-773966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-773966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
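The dump above is the full cluster profile that minikube persists a few lines below as profiles/multinode-773966/config.json. As a reading aid, here is a minimal Go sketch of serializing such a profile; the structs are an illustrative subset with guessed field shapes, not minikube's actual config types.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Illustrative subset of the profile fields visible in the dump above.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		NetworkPlugin     string
		ServiceCIDR       string
	}

	type ClusterConfig struct {
		Name               string
		Driver             string
		Memory             int // MB
		CPUs               int
		MultiNodeRequested bool
		KubernetesConfig   KubernetesConfig
	}

	func main() {
		cfg := ClusterConfig{
			Name:               "multinode-773966",
			Driver:             "docker",
			Memory:             2200,
			CPUs:               2,
			MultiNodeRequested: true,
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.28.3",
				ClusterName:       "multinode-773966",
				ContainerRuntime:  "crio",
				NetworkPlugin:     "cni",
				ServiceCIDR:       "10.96.0.0/12",
			},
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out)) // shape of what lands in config.json
	}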
	I1024 20:01:01.390769 1181050 out.go:177] * Starting control plane node multinode-773966 in cluster multinode-773966
	I1024 20:01:01.392791 1181050 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 20:01:01.395033 1181050 out.go:177] * Pulling base image ...
	I1024 20:01:01.397262 1181050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:01:01.397317 1181050 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1024 20:01:01.397331 1181050 cache.go:57] Caching tarball of preloaded images
	I1024 20:01:01.397350 1181050 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 20:01:01.397411 1181050 preload.go:174] Found /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1024 20:01:01.397421 1181050 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 20:01:01.397842 1181050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/config.json ...
	I1024 20:01:01.397880 1181050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/config.json: {Name:mk9a0538333493340ca569ca03ea216f90d1cf33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:01.415611 1181050 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 20:01:01.415634 1181050 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 20:01:01.415660 1181050 cache.go:195] Successfully downloaded all kic artifacts
	I1024 20:01:01.415728 1181050 start.go:365] acquiring machines lock for multinode-773966: {Name:mk33902e3abe327881c6fa01932d82a00030d203 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:01:01.415840 1181050 start.go:369] acquired machines lock for "multinode-773966" in 93.678µs
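The machines lock above carries a Delay of 500ms and a Timeout of 10m0s. A minimal sketch of that retry-until-deadline pattern using an exclusive file create; minikube itself uses a lock library, so treat this as an assumption-laden stand-in, with the path below purely illustrative.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquireLock retries an exclusive create until it wins or the deadline passes.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation fail while another holder owns the file.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; provisioning machine...")
	}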
	I1024 20:01:01.415865 1181050 start.go:93] Provisioning new machine with config: &{Name:multinode-773966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-773966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:01:01.415943 1181050 start.go:125] createHost starting for "" (driver="docker")
	I1024 20:01:01.418485 1181050 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1024 20:01:01.418844 1181050 start.go:159] libmachine.API.Create for "multinode-773966" (driver="docker")
	I1024 20:01:01.418875 1181050 client.go:168] LocalClient.Create starting
	I1024 20:01:01.418953 1181050 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem
	I1024 20:01:01.418989 1181050 main.go:141] libmachine: Decoding PEM data...
	I1024 20:01:01.419014 1181050 main.go:141] libmachine: Parsing certificate...
	I1024 20:01:01.419070 1181050 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem
	I1024 20:01:01.419093 1181050 main.go:141] libmachine: Decoding PEM data...
	I1024 20:01:01.419111 1181050 main.go:141] libmachine: Parsing certificate...
	I1024 20:01:01.419487 1181050 cli_runner.go:164] Run: docker network inspect multinode-773966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 20:01:01.437814 1181050 cli_runner.go:211] docker network inspect multinode-773966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 20:01:01.437893 1181050 network_create.go:281] running [docker network inspect multinode-773966] to gather additional debugging logs...
	I1024 20:01:01.437914 1181050 cli_runner.go:164] Run: docker network inspect multinode-773966
	W1024 20:01:01.456167 1181050 cli_runner.go:211] docker network inspect multinode-773966 returned with exit code 1
	I1024 20:01:01.456207 1181050 network_create.go:284] error running [docker network inspect multinode-773966]: docker network inspect multinode-773966: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-773966 not found
	I1024 20:01:01.456221 1181050 network_create.go:286] output of [docker network inspect multinode-773966]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-773966 not found
	
	** /stderr **
	I1024 20:01:01.456329 1181050 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 20:01:01.475506 1181050 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e280ec74d15 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:2f:b4:6a} reservation:<nil>}
	I1024 20:01:01.475865 1181050 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025ac040}
	I1024 20:01:01.475897 1181050 network_create.go:124] attempt to create docker network multinode-773966 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1024 20:01:01.475981 1181050 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-773966 multinode-773966
	I1024 20:01:01.550714 1181050 network_create.go:108] docker network multinode-773966 192.168.58.0/24 created
	I1024 20:01:01.550748 1181050 kic.go:118] calculated static IP "192.168.58.2" for the "multinode-773966" container
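The network step above skips 192.168.49.0/24 because an existing bridge occupies it, settles on 192.168.58.0/24, and derives the node's static IP as gateway+1. A hedged Go sketch of that selection; the /24 candidates and the step of 9 in the third octet are inferred from the log output (49, then 58), not a documented contract.

	package main

	import (
		"fmt"
		"net"
	)

	// taken reports whether any local interface already sits inside cidr,
	// mirroring the "skipping subnet ... that is taken" check above.
	func taken(cidr string) bool {
		_, subnet, _ := net.ParseCIDR(cidr)
		addrs, _ := net.InterfaceAddrs()
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		for third := 49; third < 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if taken(cidr) {
				fmt.Println("skipping taken subnet", cidr)
				continue
			}
			gateway := fmt.Sprintf("192.168.%d.1", third)
			staticIP := fmt.Sprintf("192.168.%d.2", third) // gateway + 1, as in the log
			fmt.Println("using", cidr, "gateway", gateway, "node IP", staticIP)
			return
		}
	}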
	I1024 20:01:01.550835 1181050 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 20:01:01.568570 1181050 cli_runner.go:164] Run: docker volume create multinode-773966 --label name.minikube.sigs.k8s.io=multinode-773966 --label created_by.minikube.sigs.k8s.io=true
	I1024 20:01:01.588032 1181050 oci.go:103] Successfully created a docker volume multinode-773966
	I1024 20:01:01.588135 1181050 cli_runner.go:164] Run: docker run --rm --name multinode-773966-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-773966 --entrypoint /usr/bin/test -v multinode-773966:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 20:01:02.200883 1181050 oci.go:107] Successfully prepared a docker volume multinode-773966
	I1024 20:01:02.200938 1181050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:01:02.200958 1181050 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 20:01:02.201039 1181050 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-773966:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 20:01:06.520452 1181050 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-773966:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.319372757s)
	I1024 20:01:06.520496 1181050 kic.go:200] duration metric: took 4.319535 seconds to extract preloaded images to volume
	W1024 20:01:06.520640 1181050 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 20:01:06.520745 1181050 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 20:01:06.601597 1181050 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-773966 --name multinode-773966 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-773966 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-773966 --network multinode-773966 --ip 192.168.58.2 --volume multinode-773966:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 20:01:06.954347 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Running}}
	I1024 20:01:06.975036 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Status}}
	I1024 20:01:07.005094 1181050 cli_runner.go:164] Run: docker exec multinode-773966 stat /var/lib/dpkg/alternatives/iptables
	I1024 20:01:07.057032 1181050 oci.go:144] the created container "multinode-773966" has a running status.
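The cli_runner lines above poll the new container with docker container inspect and a Go template. A self-contained equivalent of that probe, with the container name taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerRunning shells out the same way the cli_runner does:
	// docker container inspect <name> --format {{.State.Running}}.
	func containerRunning(name string) (bool, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Running}}").Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "true", nil
	}

	func main() {
		up, err := containerRunning("multinode-773966")
		fmt.Println(up, err)
	}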
	I1024 20:01:07.057057 1181050 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa...
	I1024 20:01:07.599228 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1024 20:01:07.599325 1181050 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
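The "Creating ssh key for kic" step generates a fresh RSA key pair and installs the public half as authorized_keys (the 381-byte file above). A minimal sketch of that, assuming the standard library plus golang.org/x/crypto/ssh; the output file names are illustrative.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// PEM-encode the private key for the machine directory.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		// authorized_keys format for the container's /home/docker/.ssh.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			panic(err)
		}
	}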
	I1024 20:01:07.631802 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Status}}
	I1024 20:01:07.667404 1181050 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 20:01:07.667423 1181050 kic_runner.go:114] Args: [docker exec --privileged multinode-773966 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 20:01:07.773460 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Status}}
	I1024 20:01:07.800796 1181050 machine.go:88] provisioning docker machine ...
	I1024 20:01:07.800843 1181050 ubuntu.go:169] provisioning hostname "multinode-773966"
	I1024 20:01:07.800920 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:07.830358 1181050 main.go:141] libmachine: Using SSH client type: native
	I1024 20:01:07.830804 1181050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34285 <nil> <nil>}
	I1024 20:01:07.830817 1181050 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773966 && echo "multinode-773966" | sudo tee /etc/hostname
	I1024 20:01:08.036847 1181050 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773966
	
	I1024 20:01:08.036928 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:08.072519 1181050 main.go:141] libmachine: Using SSH client type: native
	I1024 20:01:08.072958 1181050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34285 <nil> <nil>}
	I1024 20:01:08.072976 1181050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773966/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:01:08.218810 1181050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:01:08.218853 1181050 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 20:01:08.218875 1181050 ubuntu.go:177] setting up certificates
	I1024 20:01:08.218884 1181050 provision.go:83] configureAuth start
	I1024 20:01:08.218963 1181050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-773966
	I1024 20:01:08.238794 1181050 provision.go:138] copyHostCerts
	I1024 20:01:08.238838 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 20:01:08.238867 1181050 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 20:01:08.238878 1181050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 20:01:08.238965 1181050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 20:01:08.239043 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 20:01:08.239064 1181050 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 20:01:08.239072 1181050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 20:01:08.239101 1181050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 20:01:08.239148 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 20:01:08.239168 1181050 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 20:01:08.239173 1181050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 20:01:08.239197 1181050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 20:01:08.239266 1181050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.multinode-773966 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-773966]
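The "generating server cert" line issues a machine certificate whose SANs cover the node IP, loopback, and host names listed above. A compact crypto/x509 sketch of that idea; unlike the real step, which signs with the CA in ca.pem/ca-key.pem, this one self-signs for brevity.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-773966"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the san=[...] list in the log line above.
			DNSNames:    []string{"localhost", "minikube", "multinode-773966"},
			IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", out, 0o644); err != nil {
			panic(err)
		}
	}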
	I1024 20:01:08.453633 1181050 provision.go:172] copyRemoteCerts
	I1024 20:01:08.453699 1181050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:01:08.453759 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:08.474783 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:01:08.576736 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 20:01:08.576795 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 20:01:08.605941 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 20:01:08.606000 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1024 20:01:08.634196 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 20:01:08.634257 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:01:08.662110 1181050 provision.go:86] duration metric: configureAuth took 443.209336ms
	I1024 20:01:08.662140 1181050 ubuntu.go:193] setting minikube options for container-runtime
	I1024 20:01:08.662318 1181050 config.go:182] Loaded profile config "multinode-773966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:01:08.662438 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:08.680790 1181050 main.go:141] libmachine: Using SSH client type: native
	I1024 20:01:08.681218 1181050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34285 <nil> <nil>}
	I1024 20:01:08.681235 1181050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:01:08.931973 1181050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:01:08.931994 1181050 machine.go:91] provisioned docker machine in 1.131165807s
	I1024 20:01:08.932004 1181050 client.go:171] LocalClient.Create took 7.513122111s
	I1024 20:01:08.932016 1181050 start.go:167] duration metric: libmachine.API.Create for "multinode-773966" took 7.513183116s
	I1024 20:01:08.932024 1181050 start.go:300] post-start starting for "multinode-773966" (driver="docker")
	I1024 20:01:08.932033 1181050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:01:08.932109 1181050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:01:08.932152 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:08.953206 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:01:09.054283 1181050 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:01:09.058631 1181050 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1024 20:01:09.058650 1181050 command_runner.go:130] > NAME="Ubuntu"
	I1024 20:01:09.058658 1181050 command_runner.go:130] > VERSION_ID="22.04"
	I1024 20:01:09.058664 1181050 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1024 20:01:09.058670 1181050 command_runner.go:130] > VERSION_CODENAME=jammy
	I1024 20:01:09.058675 1181050 command_runner.go:130] > ID=ubuntu
	I1024 20:01:09.058680 1181050 command_runner.go:130] > ID_LIKE=debian
	I1024 20:01:09.058686 1181050 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1024 20:01:09.058692 1181050 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1024 20:01:09.058700 1181050 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1024 20:01:09.058711 1181050 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1024 20:01:09.058721 1181050 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1024 20:01:09.058786 1181050 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 20:01:09.058818 1181050 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 20:01:09.058833 1181050 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 20:01:09.058845 1181050 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 20:01:09.058856 1181050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 20:01:09.058919 1181050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 20:01:09.059004 1181050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 20:01:09.059014 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> /etc/ssl/certs/11176342.pem
	I1024 20:01:09.059111 1181050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:01:09.069395 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 20:01:09.098879 1181050 start.go:303] post-start completed in 166.839815ms
	I1024 20:01:09.099260 1181050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-773966
	I1024 20:01:09.119004 1181050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/config.json ...
	I1024 20:01:09.119278 1181050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 20:01:09.119332 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:09.137659 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:01:09.231323 1181050 command_runner.go:130] > 11%!(MISSING)
	I1024 20:01:09.231819 1181050 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 20:01:09.237147 1181050 command_runner.go:130] > 174G
	I1024 20:01:09.237538 1181050 start.go:128] duration metric: createHost completed in 7.821583198s
	I1024 20:01:09.237555 1181050 start.go:83] releasing machines lock for "multinode-773966", held for 7.821706225s
	I1024 20:01:09.237631 1181050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-773966
	I1024 20:01:09.255471 1181050 ssh_runner.go:195] Run: cat /version.json
	I1024 20:01:09.255528 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:09.255817 1181050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:01:09.255882 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:09.281824 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:01:09.282927 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:01:09.503074 1181050 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 20:01:09.503171 1181050 command_runner.go:130] > {"iso_version": "v1.31.0-1697471113-17434", "kicbase_version": "v0.0.40-1698055645-17423", "minikube_version": "v1.31.2", "commit": "585245745aba695f9444ad633713942a6eacd882"}
	I1024 20:01:09.503324 1181050 ssh_runner.go:195] Run: systemctl --version
	I1024 20:01:09.508445 1181050 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1024 20:01:09.508536 1181050 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1024 20:01:09.508613 1181050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:01:09.654467 1181050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 20:01:09.659950 1181050 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1024 20:01:09.659980 1181050 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1024 20:01:09.659993 1181050 command_runner.go:130] > Device: 3ah/58d	Inode: 1569408     Links: 1
	I1024 20:01:09.660001 1181050 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 20:01:09.660008 1181050 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1024 20:01:09.660014 1181050 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1024 20:01:09.660024 1181050 command_runner.go:130] > Change: 2023-10-24 19:23:59.409159037 +0000
	I1024 20:01:09.660039 1181050 command_runner.go:130] >  Birth: 2023-10-24 19:23:59.409159037 +0000
	I1024 20:01:09.660107 1181050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:01:09.682775 1181050 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 20:01:09.682861 1181050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:01:09.722267 1181050 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1024 20:01:09.722297 1181050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1024 20:01:09.722305 1181050 start.go:472] detecting cgroup driver to use...
	I1024 20:01:09.722335 1181050 detect.go:196] detected "cgroupfs" cgroup driver on host os
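The 'detected "cgroupfs" cgroup driver on host os' line is a host probe. One common way to make that call is to statfs /sys/fs/cgroup and test for the cgroup2 magic; minikube's detect.go may use different heuristics, so the sketch below (using golang.org/x/sys/unix) is an assumption, not its implementation.

	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	// detectCgroupDriver: a cgroup2 mount at /sys/fs/cgroup implies the
	// unified hierarchy (systemd driver is typical); otherwise legacy cgroupfs.
	func detectCgroupDriver() (string, error) {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			return "", err
		}
		if st.Type == unix.CGROUP2_SUPER_MAGIC {
			return "systemd", nil
		}
		return "cgroupfs", nil
	}

	func main() {
		d, err := detectCgroupDriver()
		fmt.Println(d, err)
	}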
	I1024 20:01:09.722392 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:01:09.740020 1181050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:01:09.753401 1181050 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:01:09.753465 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:01:09.769228 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:01:09.786453 1181050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:01:09.878763 1181050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:01:09.989722 1181050 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1024 20:01:09.989816 1181050 docker.go:214] disabling docker service ...
	I1024 20:01:09.989881 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:01:10.013537 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:01:10.030946 1181050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:01:10.124019 1181050 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1024 20:01:10.124123 1181050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:01:10.225416 1181050 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1024 20:01:10.225502 1181050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:01:10.239067 1181050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:01:10.258155 1181050 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1024 20:01:10.259467 1181050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:01:10.259554 1181050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:01:10.271228 1181050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:01:10.271299 1181050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:01:10.283073 1181050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:01:10.295198 1181050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
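The sed invocations above pin the pause image, force cgroup_manager = "cgroupfs", and re-add conmon_cgroup = "pod" in the cri-o drop-in. The same cgroup rewrite expressed in Go, as a sketch; the path and key names come from the log, and the pause-image edit is omitted.

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of: sed -i '/conmon_cgroup = .*/d'
		data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
		// Equivalent of the cgroup_manager sed plus the '/a conmon_cgroup = "pod"' append.
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			panic(err)
		}
	}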
	I1024 20:01:10.307289 1181050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:01:10.319058 1181050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:01:10.328195 1181050 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1024 20:01:10.329228 1181050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:01:10.339635 1181050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:01:10.433824 1181050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:01:10.558839 1181050 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:01:10.558951 1181050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:01:10.563552 1181050 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 20:01:10.563577 1181050 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 20:01:10.563585 1181050 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1024 20:01:10.563593 1181050 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 20:01:10.563628 1181050 command_runner.go:130] > Access: 2023-10-24 20:01:10.543260045 +0000
	I1024 20:01:10.563651 1181050 command_runner.go:130] > Modify: 2023-10-24 20:01:10.543260045 +0000
	I1024 20:01:10.563662 1181050 command_runner.go:130] > Change: 2023-10-24 20:01:10.543260045 +0000
	I1024 20:01:10.563672 1181050 command_runner.go:130] >  Birth: -
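"Will wait 60s for socket path /var/run/crio/crio.sock" followed by the stat above is a poll-until-exists loop. A minimal local sketch of that wait (the real check stats the path over SSH, and the 500ms poll interval here is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s did not appear within %s", path, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
	}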
	I1024 20:01:10.563923 1181050 start.go:540] Will wait 60s for crictl version
	I1024 20:01:10.564001 1181050 ssh_runner.go:195] Run: which crictl
	I1024 20:01:10.568030 1181050 command_runner.go:130] > /usr/bin/crictl
	I1024 20:01:10.568474 1181050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:01:10.612767 1181050 command_runner.go:130] > Version:  0.1.0
	I1024 20:01:10.612788 1181050 command_runner.go:130] > RuntimeName:  cri-o
	I1024 20:01:10.612794 1181050 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1024 20:01:10.612801 1181050 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 20:01:10.615195 1181050 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 20:01:10.615281 1181050 ssh_runner.go:195] Run: crio --version
	I1024 20:01:10.661820 1181050 command_runner.go:130] > crio version 1.24.6
	I1024 20:01:10.661921 1181050 command_runner.go:130] > Version:          1.24.6
	I1024 20:01:10.661949 1181050 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1024 20:01:10.661971 1181050 command_runner.go:130] > GitTreeState:     clean
	I1024 20:01:10.662003 1181050 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1024 20:01:10.662028 1181050 command_runner.go:130] > GoVersion:        go1.18.2
	I1024 20:01:10.662046 1181050 command_runner.go:130] > Compiler:         gc
	I1024 20:01:10.662062 1181050 command_runner.go:130] > Platform:         linux/arm64
	I1024 20:01:10.662096 1181050 command_runner.go:130] > Linkmode:         dynamic
	I1024 20:01:10.662119 1181050 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 20:01:10.662136 1181050 command_runner.go:130] > SeccompEnabled:   true
	I1024 20:01:10.662167 1181050 command_runner.go:130] > AppArmorEnabled:  false
	I1024 20:01:10.664411 1181050 ssh_runner.go:195] Run: crio --version
	I1024 20:01:10.708745 1181050 command_runner.go:130] > crio version 1.24.6
	I1024 20:01:10.708812 1181050 command_runner.go:130] > Version:          1.24.6
	I1024 20:01:10.708834 1181050 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1024 20:01:10.708854 1181050 command_runner.go:130] > GitTreeState:     clean
	I1024 20:01:10.708885 1181050 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1024 20:01:10.708908 1181050 command_runner.go:130] > GoVersion:        go1.18.2
	I1024 20:01:10.708930 1181050 command_runner.go:130] > Compiler:         gc
	I1024 20:01:10.708946 1181050 command_runner.go:130] > Platform:         linux/arm64
	I1024 20:01:10.708977 1181050 command_runner.go:130] > Linkmode:         dynamic
	I1024 20:01:10.709013 1181050 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 20:01:10.709031 1181050 command_runner.go:130] > SeccompEnabled:   true
	I1024 20:01:10.709062 1181050 command_runner.go:130] > AppArmorEnabled:  false
	I1024 20:01:10.713533 1181050 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 20:01:10.715463 1181050 cli_runner.go:164] Run: docker network inspect multinode-773966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 20:01:10.733193 1181050 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1024 20:01:10.738191 1181050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 20:01:10.751432 1181050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:01:10.751504 1181050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:01:10.813816 1181050 command_runner.go:130] > {
	I1024 20:01:10.813839 1181050 command_runner.go:130] >   "images": [
	I1024 20:01:10.813844 1181050 command_runner.go:130] >     {
	I1024 20:01:10.813854 1181050 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1024 20:01:10.813860 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.813867 1181050 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1024 20:01:10.813872 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.813881 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.813892 1181050 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1024 20:01:10.813906 1181050 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1024 20:01:10.813910 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.813921 1181050 command_runner.go:130] >       "size": "60867618",
	I1024 20:01:10.813926 1181050 command_runner.go:130] >       "uid": null,
	I1024 20:01:10.813938 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.813956 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.813965 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.813970 1181050 command_runner.go:130] >     },
	I1024 20:01:10.813979 1181050 command_runner.go:130] >     {
	I1024 20:01:10.813987 1181050 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1024 20:01:10.813995 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.814002 1181050 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1024 20:01:10.814009 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814014 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.814027 1181050 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1024 20:01:10.814040 1181050 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1024 20:01:10.814047 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814054 1181050 command_runner.go:130] >       "size": "29037500",
	I1024 20:01:10.814059 1181050 command_runner.go:130] >       "uid": null,
	I1024 20:01:10.814068 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.814073 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.814081 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.814088 1181050 command_runner.go:130] >     },
	I1024 20:01:10.814096 1181050 command_runner.go:130] >     {
	I1024 20:01:10.814104 1181050 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1024 20:01:10.814112 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.814119 1181050 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1024 20:01:10.814126 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814132 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.814141 1181050 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1024 20:01:10.814153 1181050 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1024 20:01:10.814162 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814167 1181050 command_runner.go:130] >       "size": "51393451",
	I1024 20:01:10.814175 1181050 command_runner.go:130] >       "uid": null,
	I1024 20:01:10.814181 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.814189 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.814194 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.814201 1181050 command_runner.go:130] >     },
	I1024 20:01:10.814206 1181050 command_runner.go:130] >     {
	I1024 20:01:10.814217 1181050 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1024 20:01:10.814227 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.814233 1181050 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1024 20:01:10.814241 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814247 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.814259 1181050 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1024 20:01:10.814271 1181050 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1024 20:01:10.814287 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814296 1181050 command_runner.go:130] >       "size": "182203183",
	I1024 20:01:10.814301 1181050 command_runner.go:130] >       "uid": {
	I1024 20:01:10.814309 1181050 command_runner.go:130] >         "value": "0"
	I1024 20:01:10.814314 1181050 command_runner.go:130] >       },
	I1024 20:01:10.814322 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.814327 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.814332 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.814340 1181050 command_runner.go:130] >     },
	I1024 20:01:10.814345 1181050 command_runner.go:130] >     {
	I1024 20:01:10.814356 1181050 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1024 20:01:10.814364 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.814373 1181050 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1024 20:01:10.814381 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814386 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.814398 1181050 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1024 20:01:10.814411 1181050 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1024 20:01:10.814415 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814425 1181050 command_runner.go:130] >       "size": "121054158",
	I1024 20:01:10.814430 1181050 command_runner.go:130] >       "uid": {
	I1024 20:01:10.814439 1181050 command_runner.go:130] >         "value": "0"
	I1024 20:01:10.814444 1181050 command_runner.go:130] >       },
	I1024 20:01:10.814452 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.814457 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.814465 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.814469 1181050 command_runner.go:130] >     },
	I1024 20:01:10.814476 1181050 command_runner.go:130] >     {
	I1024 20:01:10.814484 1181050 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1024 20:01:10.814493 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.814500 1181050 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1024 20:01:10.814509 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814514 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.814527 1181050 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1024 20:01:10.814540 1181050 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1024 20:01:10.814548 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814553 1181050 command_runner.go:130] >       "size": "117252916",
	I1024 20:01:10.814564 1181050 command_runner.go:130] >       "uid": {
	I1024 20:01:10.814572 1181050 command_runner.go:130] >         "value": "0"
	I1024 20:01:10.814577 1181050 command_runner.go:130] >       },
	I1024 20:01:10.814586 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.814591 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.814600 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.814604 1181050 command_runner.go:130] >     },
	I1024 20:01:10.814612 1181050 command_runner.go:130] >     {
	I1024 20:01:10.814620 1181050 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1024 20:01:10.814628 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.814634 1181050 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1024 20:01:10.814642 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814649 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.814661 1181050 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1024 20:01:10.814674 1181050 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1024 20:01:10.814678 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814685 1181050 command_runner.go:130] >       "size": "69926807",
	I1024 20:01:10.814694 1181050 command_runner.go:130] >       "uid": null,
	I1024 20:01:10.814699 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.814707 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.814714 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.814721 1181050 command_runner.go:130] >     },
	I1024 20:01:10.814726 1181050 command_runner.go:130] >     {
	I1024 20:01:10.814737 1181050 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1024 20:01:10.814745 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.814752 1181050 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1024 20:01:10.814758 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814763 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.814803 1181050 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1024 20:01:10.814817 1181050 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1024 20:01:10.814826 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814835 1181050 command_runner.go:130] >       "size": "59188020",
	I1024 20:01:10.814841 1181050 command_runner.go:130] >       "uid": {
	I1024 20:01:10.814850 1181050 command_runner.go:130] >         "value": "0"
	I1024 20:01:10.814855 1181050 command_runner.go:130] >       },
	I1024 20:01:10.814864 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.814869 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.814877 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.814882 1181050 command_runner.go:130] >     },
	I1024 20:01:10.814889 1181050 command_runner.go:130] >     {
	I1024 20:01:10.814897 1181050 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1024 20:01:10.814916 1181050 command_runner.go:130] >       "repoTags": [
	I1024 20:01:10.814926 1181050 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1024 20:01:10.814931 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814938 1181050 command_runner.go:130] >       "repoDigests": [
	I1024 20:01:10.814947 1181050 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1024 20:01:10.814959 1181050 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1024 20:01:10.814967 1181050 command_runner.go:130] >       ],
	I1024 20:01:10.814974 1181050 command_runner.go:130] >       "size": "520014",
	I1024 20:01:10.814982 1181050 command_runner.go:130] >       "uid": {
	I1024 20:01:10.814988 1181050 command_runner.go:130] >         "value": "65535"
	I1024 20:01:10.814995 1181050 command_runner.go:130] >       },
	I1024 20:01:10.815000 1181050 command_runner.go:130] >       "username": "",
	I1024 20:01:10.815008 1181050 command_runner.go:130] >       "spec": null,
	I1024 20:01:10.815014 1181050 command_runner.go:130] >       "pinned": false
	I1024 20:01:10.815022 1181050 command_runner.go:130] >     }
	I1024 20:01:10.815026 1181050 command_runner.go:130] >   ]
	I1024 20:01:10.815030 1181050 command_runner.go:130] > }
	I1024 20:01:10.817456 1181050 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:01:10.817473 1181050 crio.go:415] Images already preloaded, skipping extraction
	I1024 20:01:10.817525 1181050 ssh_runner.go:195] Run: sudo crictl images --output json
	I1024 20:01:10.857337 1181050 crio.go:496] all images are preloaded for cri-o runtime.
	I1024 20:01:10.857357 1181050 cache_images.go:84] Images are preloaded, skipping loading
	I1024 20:01:10.857431 1181050 ssh_runner.go:195] Run: crio config
	I1024 20:01:10.909583 1181050 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 20:01:10.909605 1181050 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 20:01:10.909614 1181050 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 20:01:10.909618 1181050 command_runner.go:130] > #
	I1024 20:01:10.909627 1181050 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 20:01:10.909635 1181050 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 20:01:10.909643 1181050 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 20:01:10.909653 1181050 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 20:01:10.909660 1181050 command_runner.go:130] > # reload'.
	I1024 20:01:10.909670 1181050 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 20:01:10.909682 1181050 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 20:01:10.909690 1181050 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 20:01:10.909697 1181050 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 20:01:10.909704 1181050 command_runner.go:130] > [crio]
	I1024 20:01:10.909712 1181050 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 20:01:10.909720 1181050 command_runner.go:130] > # containers images, in this directory.
	I1024 20:01:10.910144 1181050 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1024 20:01:10.910161 1181050 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 20:01:10.910323 1181050 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1024 20:01:10.910338 1181050 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 20:01:10.910346 1181050 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 20:01:10.910603 1181050 command_runner.go:130] > # storage_driver = "vfs"
	I1024 20:01:10.910618 1181050 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 20:01:10.910626 1181050 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 20:01:10.910631 1181050 command_runner.go:130] > # storage_option = [
	I1024 20:01:10.910635 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.910643 1181050 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 20:01:10.910651 1181050 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 20:01:10.910657 1181050 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 20:01:10.910664 1181050 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 20:01:10.910675 1181050 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 20:01:10.910681 1181050 command_runner.go:130] > # always happen on a node reboot
	I1024 20:01:10.910687 1181050 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 20:01:10.910694 1181050 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 20:01:10.910701 1181050 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 20:01:10.910718 1181050 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 20:01:10.910725 1181050 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 20:01:10.910734 1181050 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 20:01:10.910744 1181050 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 20:01:10.910750 1181050 command_runner.go:130] > # internal_wipe = true
	I1024 20:01:10.910756 1181050 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 20:01:10.910764 1181050 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 20:01:10.910771 1181050 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 20:01:10.910777 1181050 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 20:01:10.910786 1181050 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 20:01:10.910790 1181050 command_runner.go:130] > [crio.api]
	I1024 20:01:10.910797 1181050 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 20:01:10.910802 1181050 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 20:01:10.910810 1181050 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 20:01:10.910816 1181050 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 20:01:10.910823 1181050 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 20:01:10.910830 1181050 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 20:01:10.910835 1181050 command_runner.go:130] > # stream_port = "0"
	I1024 20:01:10.910841 1181050 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 20:01:10.910846 1181050 command_runner.go:130] > # stream_enable_tls = false
	I1024 20:01:10.910854 1181050 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 20:01:10.910859 1181050 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 20:01:10.910868 1181050 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 20:01:10.910885 1181050 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 20:01:10.910892 1181050 command_runner.go:130] > # minutes.
	I1024 20:01:10.910916 1181050 command_runner.go:130] > # stream_tls_cert = ""
	I1024 20:01:10.910924 1181050 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 20:01:10.910932 1181050 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 20:01:10.910937 1181050 command_runner.go:130] > # stream_tls_key = ""
	I1024 20:01:10.910944 1181050 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 20:01:10.910960 1181050 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 20:01:10.910969 1181050 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 20:01:10.910984 1181050 command_runner.go:130] > # stream_tls_ca = ""
	I1024 20:01:10.910993 1181050 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 20:01:10.910999 1181050 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1024 20:01:10.911007 1181050 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 20:01:10.911013 1181050 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1024 20:01:10.911058 1181050 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 20:01:10.911066 1181050 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 20:01:10.911070 1181050 command_runner.go:130] > [crio.runtime]
	I1024 20:01:10.911077 1181050 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 20:01:10.911084 1181050 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 20:01:10.911089 1181050 command_runner.go:130] > # "nofile=1024:2048"
	I1024 20:01:10.911096 1181050 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 20:01:10.911338 1181050 command_runner.go:130] > # default_ulimits = [
	I1024 20:01:10.911348 1181050 command_runner.go:130] > # ]
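As a minimal sketch of the list documented above, an uncommented stanza using the comment's own "nofile=1024:2048" example would look like the following (illustrative, not a value captured from this node):

    default_ulimits = [
        "nofile=1024:2048",  # <ulimit name>=<soft limit>:<hard limit>, per the comment above
    ]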
	I1024 20:01:10.911356 1181050 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 20:01:10.911365 1181050 command_runner.go:130] > # no_pivot = false
	I1024 20:01:10.911372 1181050 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 20:01:10.911379 1181050 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 20:01:10.911690 1181050 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 20:01:10.911704 1181050 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 20:01:10.911710 1181050 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 20:01:10.911722 1181050 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 20:01:10.911727 1181050 command_runner.go:130] > # conmon = ""
	I1024 20:01:10.911737 1181050 command_runner.go:130] > # Cgroup setting for conmon
	I1024 20:01:10.911745 1181050 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 20:01:10.911750 1181050 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 20:01:10.911758 1181050 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 20:01:10.911764 1181050 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 20:01:10.911772 1181050 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 20:01:10.911776 1181050 command_runner.go:130] > # conmon_env = [
	I1024 20:01:10.911780 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.911787 1181050 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 20:01:10.911793 1181050 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 20:01:10.911800 1181050 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 20:01:10.911804 1181050 command_runner.go:130] > # default_env = [
	I1024 20:01:10.911811 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.911817 1181050 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 20:01:10.911822 1181050 command_runner.go:130] > # selinux = false
	I1024 20:01:10.911829 1181050 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 20:01:10.911837 1181050 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 20:01:10.911844 1181050 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 20:01:10.911851 1181050 command_runner.go:130] > # seccomp_profile = ""
	I1024 20:01:10.911858 1181050 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 20:01:10.911865 1181050 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 20:01:10.911872 1181050 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 20:01:10.911877 1181050 command_runner.go:130] > # which might increase security.
	I1024 20:01:10.911883 1181050 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1024 20:01:10.911892 1181050 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 20:01:10.911900 1181050 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 20:01:10.911907 1181050 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 20:01:10.911914 1181050 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 20:01:10.911920 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:01:10.911926 1181050 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 20:01:10.911934 1181050 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 20:01:10.911939 1181050 command_runner.go:130] > # the cgroup blockio controller.
	I1024 20:01:10.911945 1181050 command_runner.go:130] > # blockio_config_file = ""
	I1024 20:01:10.911952 1181050 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 20:01:10.911958 1181050 command_runner.go:130] > # irqbalance daemon.
	I1024 20:01:10.911964 1181050 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 20:01:10.911973 1181050 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 20:01:10.911980 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:01:10.911984 1181050 command_runner.go:130] > # rdt_config_file = ""
	I1024 20:01:10.911991 1181050 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 20:01:10.911995 1181050 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1024 20:01:10.912003 1181050 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 20:01:10.912007 1181050 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 20:01:10.912015 1181050 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 20:01:10.912022 1181050 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 20:01:10.912026 1181050 command_runner.go:130] > # will be added.
	I1024 20:01:10.912031 1181050 command_runner.go:130] > # default_capabilities = [
	I1024 20:01:10.912035 1181050 command_runner.go:130] > # 	"CHOWN",
	I1024 20:01:10.912040 1181050 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 20:01:10.912044 1181050 command_runner.go:130] > # 	"FSETID",
	I1024 20:01:10.912048 1181050 command_runner.go:130] > # 	"FOWNER",
	I1024 20:01:10.912052 1181050 command_runner.go:130] > # 	"SETGID",
	I1024 20:01:10.912057 1181050 command_runner.go:130] > # 	"SETUID",
	I1024 20:01:10.912061 1181050 command_runner.go:130] > # 	"SETPCAP",
	I1024 20:01:10.912068 1181050 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 20:01:10.912072 1181050 command_runner.go:130] > # 	"KILL",
	I1024 20:01:10.912076 1181050 command_runner.go:130] > # ]
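For illustration, a tightened default set would keep only some of the capabilities listed above; this particular subset is an assumption, not this cluster's configuration:

    default_capabilities = [
        "CHOWN",             # subset chosen only to show the syntax
        "NET_BIND_SERVICE",
    ]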
	I1024 20:01:10.912085 1181050 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1024 20:01:10.912093 1181050 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1024 20:01:10.912098 1181050 command_runner.go:130] > # add_inheritable_capabilities = true
	I1024 20:01:10.912106 1181050 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 20:01:10.912112 1181050 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 20:01:10.912117 1181050 command_runner.go:130] > # default_sysctls = [
	I1024 20:01:10.912121 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.912126 1181050 command_runner.go:130] > # List of devices on the host that a
	I1024 20:01:10.912134 1181050 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 20:01:10.912138 1181050 command_runner.go:130] > # allowed_devices = [
	I1024 20:01:10.912143 1181050 command_runner.go:130] > # 	"/dev/fuse",
	I1024 20:01:10.912146 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.912152 1181050 command_runner.go:130] > # List of additional devices, specified as
	I1024 20:01:10.912191 1181050 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 20:01:10.912198 1181050 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 20:01:10.912206 1181050 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 20:01:10.912212 1181050 command_runner.go:130] > # additional_devices = [
	I1024 20:01:10.912216 1181050 command_runner.go:130] > # ]
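A minimal sketch reusing the exact example string from the comment above ("/dev/sdc:/dev/xvdc:rwm" is illustrative, not a device present on this host):

    additional_devices = [
        "/dev/sdc:/dev/xvdc:rwm",  # <device-on-host>:<device-on-container>:<permissions>
    ]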
	I1024 20:01:10.912222 1181050 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 20:01:10.912227 1181050 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 20:01:10.912232 1181050 command_runner.go:130] > # 	"/etc/cdi",
	I1024 20:01:10.912236 1181050 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 20:01:10.912240 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.912247 1181050 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 20:01:10.912255 1181050 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 20:01:10.912259 1181050 command_runner.go:130] > # Defaults to false.
	I1024 20:01:10.912265 1181050 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 20:01:10.912273 1181050 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 20:01:10.912280 1181050 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 20:01:10.912284 1181050 command_runner.go:130] > # hooks_dir = [
	I1024 20:01:10.912289 1181050 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 20:01:10.912293 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.912300 1181050 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 20:01:10.912309 1181050 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 20:01:10.912316 1181050 command_runner.go:130] > # its default mounts from the following two files:
	I1024 20:01:10.912319 1181050 command_runner.go:130] > #
	I1024 20:01:10.912327 1181050 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 20:01:10.912334 1181050 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 20:01:10.912341 1181050 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 20:01:10.912344 1181050 command_runner.go:130] > #
	I1024 20:01:10.912351 1181050 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 20:01:10.912359 1181050 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 20:01:10.912366 1181050 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 20:01:10.912372 1181050 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 20:01:10.912376 1181050 command_runner.go:130] > #
	I1024 20:01:10.912381 1181050 command_runner.go:130] > # default_mounts_file = ""
	I1024 20:01:10.912387 1181050 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 20:01:10.912395 1181050 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 20:01:10.912399 1181050 command_runner.go:130] > # pids_limit = 0
	I1024 20:01:10.912406 1181050 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1024 20:01:10.912413 1181050 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 20:01:10.912422 1181050 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 20:01:10.912431 1181050 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 20:01:10.912436 1181050 command_runner.go:130] > # log_size_max = -1
	I1024 20:01:10.912444 1181050 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1024 20:01:10.912450 1181050 command_runner.go:130] > # log_to_journald = false
	I1024 20:01:10.912458 1181050 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 20:01:10.912464 1181050 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 20:01:10.912470 1181050 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 20:01:10.912476 1181050 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 20:01:10.912482 1181050 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 20:01:10.912487 1181050 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 20:01:10.912493 1181050 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 20:01:10.912498 1181050 command_runner.go:130] > # read_only = false
	I1024 20:01:10.912505 1181050 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 20:01:10.912512 1181050 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 20:01:10.912517 1181050 command_runner.go:130] > # live configuration reload.
	I1024 20:01:10.912521 1181050 command_runner.go:130] > # log_level = "info"
	I1024 20:01:10.912528 1181050 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 20:01:10.912536 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:01:10.912549 1181050 command_runner.go:130] > # log_filter = ""
	I1024 20:01:10.912557 1181050 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 20:01:10.912564 1181050 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 20:01:10.912568 1181050 command_runner.go:130] > # separated by comma.
	I1024 20:01:10.912573 1181050 command_runner.go:130] > # uid_mappings = ""
	I1024 20:01:10.912580 1181050 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 20:01:10.912587 1181050 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 20:01:10.912592 1181050 command_runner.go:130] > # separated by comma.
	I1024 20:01:10.912596 1181050 command_runner.go:130] > # gid_mappings = ""
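A hedged example of the containerUID:HostUID:Size form documented above; the 0:100000:65536 range is a conventional illustration, not taken from this node:

    uid_mappings = "0:100000:65536"  # assumed range, shown only for the format
    gid_mappings = "0:100000:65536"  # assumed range, shown only for the format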
	I1024 20:01:10.912603 1181050 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 20:01:10.912610 1181050 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 20:01:10.912617 1181050 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 20:01:10.912622 1181050 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 20:01:10.912631 1181050 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 20:01:10.912638 1181050 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 20:01:10.912645 1181050 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 20:01:10.912650 1181050 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 20:01:10.912658 1181050 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 20:01:10.912665 1181050 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 20:01:10.912672 1181050 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1024 20:01:10.912677 1181050 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 20:01:10.912683 1181050 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 20:01:10.912700 1181050 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 20:01:10.912709 1181050 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 20:01:10.912716 1181050 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 20:01:10.912721 1181050 command_runner.go:130] > # drop_infra_ctr = true
	I1024 20:01:10.912728 1181050 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 20:01:10.912735 1181050 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 20:01:10.912743 1181050 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 20:01:10.912748 1181050 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 20:01:10.912755 1181050 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 20:01:10.912761 1181050 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 20:01:10.912766 1181050 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 20:01:10.912774 1181050 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 20:01:10.912778 1181050 command_runner.go:130] > # pinns_path = ""
	I1024 20:01:10.912788 1181050 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 20:01:10.912795 1181050 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 20:01:10.912803 1181050 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 20:01:10.912808 1181050 command_runner.go:130] > # default_runtime = "runc"
	I1024 20:01:10.912814 1181050 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 20:01:10.912822 1181050 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1024 20:01:10.912836 1181050 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1024 20:01:10.912841 1181050 command_runner.go:130] > # creation as a file is not desired either.
	I1024 20:01:10.912851 1181050 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 20:01:10.912857 1181050 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 20:01:10.912863 1181050 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 20:01:10.912866 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.912874 1181050 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 20:01:10.912881 1181050 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 20:01:10.912889 1181050 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 20:01:10.912896 1181050 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 20:01:10.912900 1181050 command_runner.go:130] > #
	I1024 20:01:10.912905 1181050 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 20:01:10.912912 1181050 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 20:01:10.912917 1181050 command_runner.go:130] > #  runtime_type = "oci"
	I1024 20:01:10.912923 1181050 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 20:01:10.912928 1181050 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 20:01:10.912933 1181050 command_runner.go:130] > #  allowed_annotations = []
	I1024 20:01:10.912937 1181050 command_runner.go:130] > # Where:
	I1024 20:01:10.912944 1181050 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 20:01:10.912953 1181050 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 20:01:10.912960 1181050 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 20:01:10.912968 1181050 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 20:01:10.912972 1181050 command_runner.go:130] > #   in $PATH.
	I1024 20:01:10.912979 1181050 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 20:01:10.912985 1181050 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 20:01:10.912992 1181050 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 20:01:10.912996 1181050 command_runner.go:130] > #   state.
	I1024 20:01:10.913004 1181050 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 20:01:10.913011 1181050 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1024 20:01:10.913018 1181050 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 20:01:10.913027 1181050 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 20:01:10.913036 1181050 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 20:01:10.913043 1181050 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 20:01:10.913049 1181050 command_runner.go:130] > #   The currently recognized values are:
	I1024 20:01:10.913056 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 20:01:10.913065 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 20:01:10.913072 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 20:01:10.913079 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 20:01:10.913087 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 20:01:10.913095 1181050 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 20:01:10.913102 1181050 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 20:01:10.913110 1181050 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 20:01:10.913115 1181050 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 20:01:10.913120 1181050 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 20:01:10.913126 1181050 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1024 20:01:10.913131 1181050 command_runner.go:130] > runtime_type = "oci"
	I1024 20:01:10.913136 1181050 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 20:01:10.913140 1181050 command_runner.go:130] > runtime_config_path = ""
	I1024 20:01:10.913146 1181050 command_runner.go:130] > monitor_path = ""
	I1024 20:01:10.913151 1181050 command_runner.go:130] > monitor_cgroup = ""
	I1024 20:01:10.913156 1181050 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 20:01:10.913202 1181050 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 20:01:10.913208 1181050 command_runner.go:130] > # running containers
	I1024 20:01:10.913213 1181050 command_runner.go:130] > #[crio.runtime.runtimes.crun]
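A sketch of what enabling that commented-out crun handler might look like, following the runtime-handler format documented above; the binary path is an assumption about the host, not part of this log:

    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"  # assumed install location
    runtime_type = "oci"
    runtime_root = "/run/crun"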
	I1024 20:01:10.913220 1181050 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 20:01:10.913230 1181050 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 20:01:10.913237 1181050 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1024 20:01:10.913243 1181050 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 20:01:10.913248 1181050 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 20:01:10.913254 1181050 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 20:01:10.913259 1181050 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 20:01:10.913265 1181050 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 20:01:10.913270 1181050 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1024 20:01:10.913277 1181050 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 20:01:10.913283 1181050 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 20:01:10.913291 1181050 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 20:01:10.913302 1181050 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1024 20:01:10.913311 1181050 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 20:01:10.913318 1181050 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 20:01:10.913329 1181050 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 20:01:10.913339 1181050 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 20:01:10.913345 1181050 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 20:01:10.913354 1181050 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 20:01:10.913358 1181050 command_runner.go:130] > # Example:
	I1024 20:01:10.913364 1181050 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 20:01:10.913370 1181050 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 20:01:10.913376 1181050 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 20:01:10.913382 1181050 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 20:01:10.913386 1181050 command_runner.go:130] > # cpuset = "0-1"
	I1024 20:01:10.913391 1181050 command_runner.go:130] > # cpushares = 0
	I1024 20:01:10.913395 1181050 command_runner.go:130] > # Where:
	I1024 20:01:10.913400 1181050 command_runner.go:130] > # The workload name is workload-type.
	I1024 20:01:10.913408 1181050 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 20:01:10.913415 1181050 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 20:01:10.913423 1181050 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 20:01:10.913432 1181050 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 20:01:10.913441 1181050 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1024 20:01:10.913445 1181050 command_runner.go:130] > # 
	I1024 20:01:10.913453 1181050 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 20:01:10.913457 1181050 command_runner.go:130] > #
	I1024 20:01:10.913466 1181050 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 20:01:10.913473 1181050 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 20:01:10.913480 1181050 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 20:01:10.913488 1181050 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 20:01:10.913495 1181050 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 20:01:10.913499 1181050 command_runner.go:130] > [crio.image]
	I1024 20:01:10.913506 1181050 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 20:01:10.913511 1181050 command_runner.go:130] > # default_transport = "docker://"
	I1024 20:01:10.913519 1181050 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 20:01:10.913526 1181050 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 20:01:10.913663 1181050 command_runner.go:130] > # global_auth_file = ""
	I1024 20:01:10.913693 1181050 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 20:01:10.913711 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:01:10.913722 1181050 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 20:01:10.913730 1181050 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 20:01:10.913767 1181050 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 20:01:10.913781 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:01:10.913787 1181050 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 20:01:10.913799 1181050 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 20:01:10.913806 1181050 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1024 20:01:10.913818 1181050 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1024 20:01:10.913825 1181050 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 20:01:10.913842 1181050 command_runner.go:130] > # pause_command = "/pause"
	I1024 20:01:10.913856 1181050 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 20:01:10.913866 1181050 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 20:01:10.913920 1181050 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 20:01:10.913937 1181050 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 20:01:10.913945 1181050 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 20:01:10.913959 1181050 command_runner.go:130] > # signature_policy = ""
	I1024 20:01:10.913967 1181050 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 20:01:10.913993 1181050 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 20:01:10.914000 1181050 command_runner.go:130] > # changing them here.
	I1024 20:01:10.914005 1181050 command_runner.go:130] > # insecure_registries = [
	I1024 20:01:10.914010 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.914028 1181050 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 20:01:10.914042 1181050 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1024 20:01:10.914050 1181050 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 20:01:10.914061 1181050 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 20:01:10.914067 1181050 command_runner.go:130] > # big_files_temporary_dir = ""
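	Taken together, the [crio.image] options documented above amount to a short TOML override. A minimal sketch — the registry host is hypothetical; the pause_image value is the one set in the dump above:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	    insecure_registries = [
	        "registry.example.internal:5000"
	    ]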
	I1024 20:01:10.914075 1181050 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1024 20:01:10.914079 1181050 command_runner.go:130] > # CNI plugins.
	I1024 20:01:10.914084 1181050 command_runner.go:130] > [crio.network]
	I1024 20:01:10.914103 1181050 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 20:01:10.914117 1181050 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1024 20:01:10.914123 1181050 command_runner.go:130] > # cni_default_network = ""
	I1024 20:01:10.914135 1181050 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 20:01:10.914141 1181050 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 20:01:10.914151 1181050 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 20:01:10.914160 1181050 command_runner.go:130] > # plugin_dirs = [
	I1024 20:01:10.914178 1181050 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 20:01:10.914194 1181050 command_runner.go:130] > # ]
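	As the comments above note, leaving cni_default_network unset makes CRI-O pick the first configuration found in network_dir. A sketch of pinning it explicitly — the network name is hypothetical, borrowing the kindnet CNI that minikube recommends later in this run:

	    [crio.network]
	    cni_default_network = "kindnet"
	    network_dir = "/etc/cni/net.d/"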
	I1024 20:01:10.914207 1181050 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1024 20:01:10.914212 1181050 command_runner.go:130] > [crio.metrics]
	I1024 20:01:10.914223 1181050 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 20:01:10.914228 1181050 command_runner.go:130] > # enable_metrics = false
	I1024 20:01:10.914238 1181050 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 20:01:10.914244 1181050 command_runner.go:130] > # Per default all metrics are enabled.
	I1024 20:01:10.914251 1181050 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1024 20:01:10.914271 1181050 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 20:01:10.914285 1181050 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 20:01:10.914300 1181050 command_runner.go:130] > # metrics_collectors = [
	I1024 20:01:10.914312 1181050 command_runner.go:130] > # 	"operations",
	I1024 20:01:10.914318 1181050 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 20:01:10.914323 1181050 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 20:01:10.914328 1181050 command_runner.go:130] > # 	"operations_errors",
	I1024 20:01:10.914333 1181050 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 20:01:10.914340 1181050 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 20:01:10.914346 1181050 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 20:01:10.914354 1181050 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 20:01:10.914359 1181050 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 20:01:10.914377 1181050 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 20:01:10.914401 1181050 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 20:01:10.914407 1181050 command_runner.go:130] > # 	"containers_oom_total",
	I1024 20:01:10.914415 1181050 command_runner.go:130] > # 	"containers_oom",
	I1024 20:01:10.914420 1181050 command_runner.go:130] > # 	"processes_defunct",
	I1024 20:01:10.914425 1181050 command_runner.go:130] > # 	"operations_total",
	I1024 20:01:10.914433 1181050 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 20:01:10.914439 1181050 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 20:01:10.914447 1181050 command_runner.go:130] > # 	"operations_errors_total",
	I1024 20:01:10.914452 1181050 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 20:01:10.914468 1181050 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 20:01:10.914483 1181050 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 20:01:10.914501 1181050 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 20:01:10.914514 1181050 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 20:01:10.914524 1181050 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 20:01:10.914531 1181050 command_runner.go:130] > # ]
	I1024 20:01:10.914538 1181050 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 20:01:10.914546 1181050 command_runner.go:130] > # metrics_port = 9090
	I1024 20:01:10.914556 1181050 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 20:01:10.914575 1181050 command_runner.go:130] > # metrics_socket = ""
	I1024 20:01:10.914596 1181050 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 20:01:10.914604 1181050 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 20:01:10.914617 1181050 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 20:01:10.914623 1181050 command_runner.go:130] > # certificate on any modification event.
	I1024 20:01:10.914630 1181050 command_runner.go:130] > # metrics_cert = ""
	I1024 20:01:10.914636 1181050 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 20:01:10.914646 1181050 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 20:01:10.914651 1181050 command_runner.go:130] > # metrics_key = ""
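	A sketch of turning the metrics endpoint on with a trimmed collector list, using only the options and collector names documented above (the selection is arbitrary, for illustration):

	    [crio.metrics]
	    enable_metrics = true
	    metrics_port = 9090
	    metrics_collectors = [
	        "operations",
	        "image_pulls_failures",
	        "containers_oom_count_total"
	    ]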
	I1024 20:01:10.914672 1181050 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 20:01:10.914692 1181050 command_runner.go:130] > [crio.tracing]
	I1024 20:01:10.914700 1181050 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 20:01:10.914708 1181050 command_runner.go:130] > # enable_tracing = false
	I1024 20:01:10.914717 1181050 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1024 20:01:10.914726 1181050 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 20:01:10.914733 1181050 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 20:01:10.914738 1181050 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
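	Likewise for tracing — a sketch of enabling export to an OTLP collector on the default gRPC endpoint shown above (the sampling rate here is an arbitrary illustrative value):

	    [crio.tracing]
	    enable_tracing = true
	    tracing_endpoint = "0.0.0.0:4317"
	    tracing_sampling_rate_per_million = 1000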
	I1024 20:01:10.914748 1181050 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 20:01:10.914769 1181050 command_runner.go:130] > [crio.stats]
	I1024 20:01:10.914785 1181050 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 20:01:10.914799 1181050 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 20:01:10.914806 1181050 command_runner.go:130] > # stats_collection_period = 0
	I1024 20:01:10.914846 1181050 command_runner.go:130] ! time="2023-10-24 20:01:10.904333306Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1024 20:01:10.914886 1181050 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1024 20:01:10.914990 1181050 cni.go:84] Creating CNI manager for ""
	I1024 20:01:10.915004 1181050 cni.go:136] 1 nodes found, recommending kindnet
	I1024 20:01:10.915042 1181050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:01:10.915069 1181050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773966 NodeName:multinode-773966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:01:10.915264 1181050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-773966"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
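	One way to sanity-check a generated config like the one above without mutating the node is kubeadm's dry-run mode, e.g. (path as used in this run):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run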
	
	I1024 20:01:10.915355 1181050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-773966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-773966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:01:10.915454 1181050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:01:10.925049 1181050 command_runner.go:130] > kubeadm
	I1024 20:01:10.925070 1181050 command_runner.go:130] > kubectl
	I1024 20:01:10.925076 1181050 command_runner.go:130] > kubelet
	I1024 20:01:10.926317 1181050 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:01:10.926402 1181050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 20:01:10.936737 1181050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1024 20:01:10.957341 1181050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:01:10.978198 1181050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1024 20:01:10.999189 1181050 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1024 20:01:11.004312 1181050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
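	The command above is a replace-or-append idiom for /etc/hosts: strip any stale line for the host, append the fresh mapping, and copy the result back. A generic sketch of the same pattern, with the host and IP from this run as placeholders:

	    HOST=control-plane.minikube.internal
	    IP=192.168.58.2
	    { grep -v $'\t'"${HOST}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${HOST}"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts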
	I1024 20:01:11.017591 1181050 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966 for IP: 192.168.58.2
	I1024 20:01:11.017627 1181050 certs.go:190] acquiring lock for shared ca certs: {Name:mka7b9c27527bac3ad97e94531dcdc2bc2059d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:11.017838 1181050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key
	I1024 20:01:11.017888 1181050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key
	I1024 20:01:11.017938 1181050 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.key
	I1024 20:01:11.017952 1181050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.crt with IP's: []
	I1024 20:01:11.588514 1181050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.crt ...
	I1024 20:01:11.588545 1181050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.crt: {Name:mkca575775baf7b347cec3faea13061bb51a90b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:11.588738 1181050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.key ...
	I1024 20:01:11.588752 1181050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.key: {Name:mk3290d0af8e9ce1f694c080bee86a0b50f2e348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:11.588837 1181050 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.key.cee25041
	I1024 20:01:11.588856 1181050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1024 20:01:11.901226 1181050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.crt.cee25041 ...
	I1024 20:01:11.901255 1181050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.crt.cee25041: {Name:mk95d170fc38b20aee7e8d6511c9c822265d0806 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:11.901440 1181050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.key.cee25041 ...
	I1024 20:01:11.901457 1181050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.key.cee25041: {Name:mk3c77cbb7e4b586930d15e2e72abfac8f9e8ab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:11.901541 1181050 certs.go:337] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.crt
	I1024 20:01:11.901619 1181050 certs.go:341] copying /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.key
	I1024 20:01:11.901677 1181050 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.key
	I1024 20:01:11.901693 1181050 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.crt with IP's: []
	I1024 20:01:12.605733 1181050 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.crt ...
	I1024 20:01:12.605771 1181050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.crt: {Name:mk105c36fc9802046739597a8f2b25b04a69ed2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:12.605957 1181050 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.key ...
	I1024 20:01:12.605968 1181050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.key: {Name:mk00e1d244cbd5586d92d20f864c107f657b4bfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:12.606050 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1024 20:01:12.606104 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1024 20:01:12.606121 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1024 20:01:12.606133 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1024 20:01:12.606147 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 20:01:12.606162 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 20:01:12.606173 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 20:01:12.606192 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 20:01:12.606252 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem (1338 bytes)
	W1024 20:01:12.606289 1181050 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634_empty.pem, impossibly tiny 0 bytes
	I1024 20:01:12.606302 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 20:01:12.606328 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem (1082 bytes)
	I1024 20:01:12.606355 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:01:12.606386 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem (1675 bytes)
	I1024 20:01:12.606437 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 20:01:12.606470 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:01:12.606487 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem -> /usr/share/ca-certificates/1117634.pem
	I1024 20:01:12.606500 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> /usr/share/ca-certificates/11176342.pem
	I1024 20:01:12.607130 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 20:01:12.635199 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1024 20:01:12.662594 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 20:01:12.690265 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 20:01:12.717702 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:01:12.744523 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:01:12.771831 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:01:12.799878 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 20:01:12.827734 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:01:12.856711 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem --> /usr/share/ca-certificates/1117634.pem (1338 bytes)
	I1024 20:01:12.884615 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /usr/share/ca-certificates/11176342.pem (1708 bytes)
	I1024 20:01:12.912494 1181050 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 20:01:12.933430 1181050 ssh_runner.go:195] Run: openssl version
	I1024 20:01:12.939965 1181050 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1024 20:01:12.940363 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117634.pem && ln -fs /usr/share/ca-certificates/1117634.pem /etc/ssl/certs/1117634.pem"
	I1024 20:01:12.951798 1181050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117634.pem
	I1024 20:01:12.956307 1181050 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:36 /usr/share/ca-certificates/1117634.pem
	I1024 20:01:12.956370 1181050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:36 /usr/share/ca-certificates/1117634.pem
	I1024 20:01:12.956431 1181050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117634.pem
	I1024 20:01:12.965206 1181050 command_runner.go:130] > 51391683
	I1024 20:01:12.965917 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117634.pem /etc/ssl/certs/51391683.0"
	I1024 20:01:12.978330 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11176342.pem && ln -fs /usr/share/ca-certificates/11176342.pem /etc/ssl/certs/11176342.pem"
	I1024 20:01:12.989860 1181050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11176342.pem
	I1024 20:01:12.994333 1181050 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:36 /usr/share/ca-certificates/11176342.pem
	I1024 20:01:12.994617 1181050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:36 /usr/share/ca-certificates/11176342.pem
	I1024 20:01:12.994683 1181050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11176342.pem
	I1024 20:01:13.003419 1181050 command_runner.go:130] > 3ec20f2e
	I1024 20:01:13.003522 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11176342.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 20:01:13.015516 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:01:13.027164 1181050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:01:13.032320 1181050 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:01:13.032349 1181050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:01:13.032408 1181050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:01:13.040543 1181050 command_runner.go:130] > b5213941
	I1024 20:01:13.041000 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
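	The three certificate installs above all follow the same recipe OpenSSL expects: compute the subject hash of the PEM, then symlink <hash>.0 in /etc/ssl/certs to it. Condensed into a sketch (paths and hash taken from the log):

	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "${CERT}")   # prints b5213941 for this CA
	    sudo ln -fs "${CERT}" "/etc/ssl/certs/${HASH}.0"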
	I1024 20:01:13.052237 1181050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:01:13.056508 1181050 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 20:01:13.056553 1181050 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 20:01:13.056590 1181050 kubeadm.go:404] StartCluster: {Name:multinode-773966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-773966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:01:13.056666 1181050 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1024 20:01:13.056722 1181050 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1024 20:01:13.103487 1181050 cri.go:89] found id: ""
	I1024 20:01:13.103571 1181050 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 20:01:13.113849 1181050 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1024 20:01:13.113874 1181050 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1024 20:01:13.113882 1181050 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1024 20:01:13.113952 1181050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 20:01:13.124223 1181050 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1024 20:01:13.124334 1181050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 20:01:13.134650 1181050 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1024 20:01:13.134675 1181050 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1024 20:01:13.134684 1181050 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1024 20:01:13.134711 1181050 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:01:13.134744 1181050 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 20:01:13.134788 1181050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1024 20:01:13.185429 1181050 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 20:01:13.185457 1181050 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1024 20:01:13.185879 1181050 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 20:01:13.185902 1181050 command_runner.go:130] > [preflight] Running pre-flight checks
	I1024 20:01:13.232072 1181050 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1024 20:01:13.232114 1181050 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1024 20:01:13.232168 1181050 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1048-aws
	I1024 20:01:13.232179 1181050 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1048-aws
	I1024 20:01:13.232210 1181050 kubeadm.go:322] OS: Linux
	I1024 20:01:13.232218 1181050 command_runner.go:130] > OS: Linux
	I1024 20:01:13.232260 1181050 kubeadm.go:322] CGROUPS_CPU: enabled
	I1024 20:01:13.232269 1181050 command_runner.go:130] > CGROUPS_CPU: enabled
	I1024 20:01:13.232313 1181050 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1024 20:01:13.232321 1181050 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1024 20:01:13.232364 1181050 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1024 20:01:13.232373 1181050 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1024 20:01:13.232417 1181050 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1024 20:01:13.232426 1181050 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1024 20:01:13.232471 1181050 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1024 20:01:13.232479 1181050 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1024 20:01:13.232523 1181050 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1024 20:01:13.232532 1181050 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1024 20:01:13.232573 1181050 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1024 20:01:13.232583 1181050 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1024 20:01:13.232628 1181050 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1024 20:01:13.232636 1181050 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1024 20:01:13.232679 1181050 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1024 20:01:13.232690 1181050 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1024 20:01:13.309302 1181050 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 20:01:13.309338 1181050 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 20:01:13.309429 1181050 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 20:01:13.309438 1181050 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 20:01:13.309536 1181050 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1024 20:01:13.309579 1181050 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1024 20:01:13.554125 1181050 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 20:01:13.558353 1181050 out.go:204]   - Generating certificates and keys ...
	I1024 20:01:13.554284 1181050 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 20:01:13.558453 1181050 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 20:01:13.558470 1181050 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1024 20:01:13.558536 1181050 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 20:01:13.558546 1181050 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1024 20:01:13.780880 1181050 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 20:01:13.780905 1181050 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1024 20:01:14.089792 1181050 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1024 20:01:14.089864 1181050 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1024 20:01:15.038615 1181050 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1024 20:01:15.038647 1181050 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1024 20:01:15.793215 1181050 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1024 20:01:15.793246 1181050 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1024 20:01:16.205456 1181050 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1024 20:01:16.205486 1181050 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1024 20:01:16.205835 1181050 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-773966] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1024 20:01:16.205854 1181050 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-773966] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1024 20:01:16.495977 1181050 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1024 20:01:16.496006 1181050 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1024 20:01:16.496409 1181050 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-773966] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1024 20:01:16.496429 1181050 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-773966] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1024 20:01:17.200247 1181050 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 20:01:17.200273 1181050 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1024 20:01:17.804614 1181050 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 20:01:17.804679 1181050 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1024 20:01:18.076158 1181050 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1024 20:01:18.076182 1181050 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1024 20:01:18.076483 1181050 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 20:01:18.076496 1181050 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 20:01:18.520262 1181050 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 20:01:18.520286 1181050 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 20:01:18.768206 1181050 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 20:01:18.768230 1181050 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 20:01:19.316367 1181050 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 20:01:19.316392 1181050 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 20:01:20.112275 1181050 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 20:01:20.112303 1181050 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 20:01:20.112963 1181050 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 20:01:20.112977 1181050 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 20:01:20.115781 1181050 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 20:01:20.118147 1181050 out.go:204]   - Booting up control plane ...
	I1024 20:01:20.115868 1181050 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 20:01:20.118237 1181050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 20:01:20.118246 1181050 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 20:01:20.118359 1181050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 20:01:20.118366 1181050 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 20:01:20.118890 1181050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 20:01:20.118903 1181050 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 20:01:20.129513 1181050 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 20:01:20.129536 1181050 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 20:01:20.131016 1181050 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 20:01:20.131038 1181050 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 20:01:20.131288 1181050 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 20:01:20.131317 1181050 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 20:01:20.237269 1181050 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 20:01:20.237293 1181050 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 20:01:27.739855 1181050 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502672 seconds
	I1024 20:01:27.739880 1181050 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.502672 seconds
	I1024 20:01:27.739979 1181050 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 20:01:27.739984 1181050 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 20:01:27.753202 1181050 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 20:01:27.753224 1181050 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 20:01:28.278854 1181050 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 20:01:28.278877 1181050 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1024 20:01:28.279060 1181050 kubeadm.go:322] [mark-control-plane] Marking the node multinode-773966 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 20:01:28.279082 1181050 command_runner.go:130] > [mark-control-plane] Marking the node multinode-773966 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 20:01:28.790617 1181050 kubeadm.go:322] [bootstrap-token] Using token: 1x0gyh.yg05itu118n6xxds
	I1024 20:01:28.793002 1181050 out.go:204]   - Configuring RBAC rules ...
	I1024 20:01:28.790722 1181050 command_runner.go:130] > [bootstrap-token] Using token: 1x0gyh.yg05itu118n6xxds
	I1024 20:01:28.793134 1181050 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 20:01:28.793149 1181050 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 20:01:28.797571 1181050 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 20:01:28.797593 1181050 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 20:01:28.805853 1181050 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 20:01:28.805889 1181050 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 20:01:28.809581 1181050 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 20:01:28.809606 1181050 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 20:01:28.814828 1181050 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 20:01:28.814849 1181050 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 20:01:28.818590 1181050 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 20:01:28.818617 1181050 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 20:01:28.837102 1181050 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 20:01:28.837133 1181050 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 20:01:29.089110 1181050 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 20:01:29.089136 1181050 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1024 20:01:29.224361 1181050 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 20:01:29.224387 1181050 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1024 20:01:29.224394 1181050 kubeadm.go:322] 
	I1024 20:01:29.224451 1181050 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 20:01:29.224459 1181050 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1024 20:01:29.224464 1181050 kubeadm.go:322] 
	I1024 20:01:29.224547 1181050 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 20:01:29.224557 1181050 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1024 20:01:29.224561 1181050 kubeadm.go:322] 
	I1024 20:01:29.224586 1181050 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 20:01:29.224594 1181050 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1024 20:01:29.224650 1181050 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 20:01:29.224658 1181050 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 20:01:29.224719 1181050 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 20:01:29.224729 1181050 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 20:01:29.224735 1181050 kubeadm.go:322] 
	I1024 20:01:29.224788 1181050 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 20:01:29.224797 1181050 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1024 20:01:29.224801 1181050 kubeadm.go:322] 
	I1024 20:01:29.224846 1181050 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 20:01:29.224853 1181050 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 20:01:29.224857 1181050 kubeadm.go:322] 
	I1024 20:01:29.224907 1181050 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 20:01:29.224915 1181050 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1024 20:01:29.224985 1181050 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 20:01:29.225001 1181050 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 20:01:29.225064 1181050 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 20:01:29.225073 1181050 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 20:01:29.225077 1181050 kubeadm.go:322] 
	I1024 20:01:29.225155 1181050 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 20:01:29.225163 1181050 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1024 20:01:29.225237 1181050 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 20:01:29.225246 1181050 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1024 20:01:29.225250 1181050 kubeadm.go:322] 
	I1024 20:01:29.225328 1181050 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 1x0gyh.yg05itu118n6xxds \
	I1024 20:01:29.225336 1181050 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 1x0gyh.yg05itu118n6xxds \
	I1024 20:01:29.225432 1181050 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 \
	I1024 20:01:29.225440 1181050 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 \
	I1024 20:01:29.225459 1181050 kubeadm.go:322] 	--control-plane 
	I1024 20:01:29.225467 1181050 command_runner.go:130] > 	--control-plane 
	I1024 20:01:29.225471 1181050 kubeadm.go:322] 
	I1024 20:01:29.225550 1181050 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 20:01:29.225556 1181050 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1024 20:01:29.225560 1181050 kubeadm.go:322] 
	I1024 20:01:29.225637 1181050 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 1x0gyh.yg05itu118n6xxds \
	I1024 20:01:29.225645 1181050 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1x0gyh.yg05itu118n6xxds \
	I1024 20:01:29.225843 1181050 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 
	I1024 20:01:29.225857 1181050 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 
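	The bootstrap token in the join commands above is time-limited (ttl: 24h0m0s in the InitConfiguration earlier in this run); once it expires, a fresh join command can be minted on the control plane with:

	    kubeadm token create --print-join-command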
	I1024 20:01:29.227432 1181050 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1024 20:01:29.227461 1181050 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1024 20:01:29.227717 1181050 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 20:01:29.227740 1181050 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 20:01:29.227756 1181050 cni.go:84] Creating CNI manager for ""
	I1024 20:01:29.227770 1181050 cni.go:136] 1 nodes found, recommending kindnet
	I1024 20:01:29.229955 1181050 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1024 20:01:29.231852 1181050 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 20:01:29.240690 1181050 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 20:01:29.240719 1181050 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1024 20:01:29.240728 1181050 command_runner.go:130] > Device: 3ah/58d	Inode: 1573330     Links: 1
	I1024 20:01:29.240736 1181050 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 20:01:29.240742 1181050 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1024 20:01:29.240754 1181050 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1024 20:01:29.240761 1181050 command_runner.go:130] > Change: 2023-10-24 19:24:00.149156281 +0000
	I1024 20:01:29.240773 1181050 command_runner.go:130] >  Birth: 2023-10-24 19:24:00.101156460 +0000
	I1024 20:01:29.241656 1181050 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 20:01:29.241671 1181050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 20:01:29.301549 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 20:01:30.158084 1181050 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1024 20:01:30.165292 1181050 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1024 20:01:30.178521 1181050 command_runner.go:130] > serviceaccount/kindnet created
	I1024 20:01:30.195801 1181050 command_runner.go:130] > daemonset.apps/kindnet created
	I1024 20:01:30.201320 1181050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 20:01:30.201458 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:30.201538 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=multinode-773966 minikube.k8s.io/updated_at=2023_10_24T20_01_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:30.366250 1181050 command_runner.go:130] > node/multinode-773966 labeled
	I1024 20:01:30.369775 1181050 command_runner.go:130] > -16
	I1024 20:01:30.369814 1181050 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1024 20:01:30.369837 1181050 ops.go:34] apiserver oom_adj: -16
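The "-16" read back above is the kube-apiserver's OOM adjustment, probed with the bash one-liner logged at 20:01:30.201320. A minimal Go sketch of the same probe, assuming bash and pgrep are available on the node (hypothetical helper, not minikube's actual ops.go code):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// readAPIServerOOMAdj resolves the kube-apiserver PID with pgrep and reads
// its oom_adj from /proc, exactly as the logged command does.
func readAPIServerOOMAdj() (int, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	adj, err := readAPIServerOOMAdj()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", adj) // expect -16, per the log above
}
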
	I1024 20:01:30.369907 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:30.468515 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:30.468605 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:30.559803 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:31.064224 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:31.151935 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:31.563731 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:31.653579 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:32.063937 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:32.155275 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:32.563830 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:32.658496 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:33.063703 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:33.158968 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:33.564446 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:33.659623 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:34.064284 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:34.151629 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:34.563962 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:34.650271 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:35.063629 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:35.153174 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:35.563634 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:35.654111 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:36.063739 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:36.155265 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:36.564114 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:36.655005 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:37.064469 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:37.154660 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:37.563946 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:37.659314 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:38.063813 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:38.154376 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:38.563801 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:38.657605 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:39.064223 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:39.152496 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:39.564025 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:39.655488 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:40.063820 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:40.158971 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:40.563755 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:40.651848 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:41.064179 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:41.187811 1181050 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1024 20:01:41.564167 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 20:01:41.696971 1181050 command_runner.go:130] > NAME      SECRETS   AGE
	I1024 20:01:41.696991 1181050 command_runner.go:130] > default   0         0s
	I1024 20:01:41.701063 1181050 kubeadm.go:1081] duration metric: took 11.499650563s to wait for elevateKubeSystemPrivileges.
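The run of NotFound errors above is minikube waiting for kube-controller-manager to create the "default" ServiceAccount: it reruns `kubectl get sa default` roughly every 500 ms until the command exits cleanly (11.5 s in this run). A minimal sketch of that wait loop, using the kubectl binary and kubeconfig paths from the log (hypothetical helper; the real loop lives in kubeadm.go):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the ServiceAccount
// exists or the deadline passes, mirroring the retry loop in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // exit 0: "default   0   0s", as at 20:01:41.696991
		}
		time.Sleep(500 * time.Millisecond) // the cadence visible above
	}
	return fmt.Errorf("default serviceaccount not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.3/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("wait result:", err)
}
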
	I1024 20:01:41.701092 1181050 kubeadm.go:406] StartCluster complete in 28.64450483s
	I1024 20:01:41.701108 1181050 settings.go:142] acquiring lock: {Name:mkaa82b52e1ee562b451304e36332812fcccf981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:41.701165 1181050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:01:41.702007 1181050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-1112248/kubeconfig: {Name:mkcb958baf0d06a87d3e11266d914b0c86b46ec2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:01:41.702533 1181050 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:01:41.702815 1181050 kapi.go:59] client config for multinode-773966: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
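The &rest.Config dump above is what client-go derives from the kubeconfig updated at settings.go:150: the host, client cert/key, and CA paths all come from that file. A minimal equivalent, assuming the kubeconfig path from the log (a sketch, not minikube's kapi.go code):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// With an empty master URL, BuildConfigFromFlags takes the server address
	// and TLS material entirely from the kubeconfig file.
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/17485-1112248/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("API server:", cfg.Host, "client ready:", clientset != nil)
}
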
	I1024 20:01:41.703972 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 20:01:41.703993 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:41.704004 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:41.704011 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:41.704221 1181050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 20:01:41.704809 1181050 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 20:01:41.704896 1181050 addons.go:69] Setting storage-provisioner=true in profile "multinode-773966"
	I1024 20:01:41.704919 1181050 addons.go:231] Setting addon storage-provisioner=true in "multinode-773966"
	I1024 20:01:41.704988 1181050 host.go:66] Checking if "multinode-773966" exists ...
	I1024 20:01:41.705460 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Status}}
	I1024 20:01:41.705618 1181050 cert_rotation.go:137] Starting client certificate rotation controller
	I1024 20:01:41.705820 1181050 config.go:182] Loaded profile config "multinode-773966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:01:41.705863 1181050 addons.go:69] Setting default-storageclass=true in profile "multinode-773966"
	I1024 20:01:41.705877 1181050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-773966"
	I1024 20:01:41.706108 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Status}}
	I1024 20:01:41.767074 1181050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 20:01:41.766033 1181050 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:01:41.769191 1181050 kapi.go:59] client config for multinode-773966: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 20:01:41.769478 1181050 addons.go:231] Setting addon default-storageclass=true in "multinode-773966"
	I1024 20:01:41.769509 1181050 host.go:66] Checking if "multinode-773966" exists ...
	I1024 20:01:41.769996 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Status}}
	I1024 20:01:41.770242 1181050 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:01:41.770259 1181050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 20:01:41.770307 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:41.797920 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:01:41.834011 1181050 round_trippers.go:574] Response Status: 200 OK in 129 milliseconds
	I1024 20:01:41.834035 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:41.834044 1181050 round_trippers.go:580]     Audit-Id: c6c30074-4d97-40bb-851c-8e9b7a0cb794
	I1024 20:01:41.834050 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:41.834057 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:41.834063 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:41.834069 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:41.834075 1181050 round_trippers.go:580]     Content-Length: 291
	I1024 20:01:41.834082 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:41 GMT
	I1024 20:01:41.839208 1181050 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 20:01:41.839228 1181050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 20:01:41.839288 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:01:41.861797 1181050 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"880e54a6-86b9-4b4b-bfb6-0a1742a3b535","resourceVersion":"233","creationTimestamp":"2023-10-24T20:01:29Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1024 20:01:41.862255 1181050 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"880e54a6-86b9-4b4b-bfb6-0a1742a3b535","resourceVersion":"233","creationTimestamp":"2023-10-24T20:01:29Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1024 20:01:41.862311 1181050 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 20:01:41.862317 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:41.862326 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:41.862332 1181050 round_trippers.go:473]     Content-Type: application/json
	I1024 20:01:41.862339 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:41.881853 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:01:41.893311 1181050 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I1024 20:01:41.893353 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:41.893363 1181050 round_trippers.go:580]     Content-Length: 291
	I1024 20:01:41.893370 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:41 GMT
	I1024 20:01:41.893376 1181050 round_trippers.go:580]     Audit-Id: 4210d440-8e3d-4de3-9480-a24649146222
	I1024 20:01:41.893382 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:41.893388 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:41.893398 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:41.893404 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:41.894571 1181050 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"880e54a6-86b9-4b4b-bfb6-0a1742a3b535","resourceVersion":"326","creationTimestamp":"2023-10-24T20:01:29Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1024 20:01:41.894736 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 20:01:41.894751 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:41.894760 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:41.894766 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:41.965705 1181050 round_trippers.go:574] Response Status: 200 OK in 70 milliseconds
	I1024 20:01:41.965749 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:41.965758 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:41.965766 1181050 round_trippers.go:580]     Content-Length: 291
	I1024 20:01:41.965772 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:41 GMT
	I1024 20:01:41.965778 1181050 round_trippers.go:580]     Audit-Id: 14510370-4342-4c82-8974-07d6e4dfb90d
	I1024 20:01:41.965784 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:41.965790 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:41.965796 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:41.970951 1181050 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"880e54a6-86b9-4b4b-bfb6-0a1742a3b535","resourceVersion":"326","creationTimestamp":"2023-10-24T20:01:29Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1024 20:01:41.971087 1181050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-773966" context rescaled to 1 replicas
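The GET/PUT pair on .../deployments/coredns/scale above is the rescale just summarized: read the Scale subresource (spec.replicas: 2), write it back with spec.replicas: 1. A sketch of the same rescale through client-go's scale subresource methods, assuming a clientset built from the same kubeconfig:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// GET the Scale subresource, as in the first round-trip above ...
	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ... then PUT it back with spec.replicas=1, as in the second.
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").
		UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
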
	I1024 20:01:41.971120 1181050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1024 20:01:41.973599 1181050 out.go:177] * Verifying Kubernetes components...
	I1024 20:01:41.975814 1181050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:01:41.996194 1181050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 20:01:42.041692 1181050 command_runner.go:130] > apiVersion: v1
	I1024 20:01:42.041714 1181050 command_runner.go:130] > data:
	I1024 20:01:42.041720 1181050 command_runner.go:130] >   Corefile: |
	I1024 20:01:42.041725 1181050 command_runner.go:130] >     .:53 {
	I1024 20:01:42.041730 1181050 command_runner.go:130] >         errors
	I1024 20:01:42.041830 1181050 command_runner.go:130] >         health {
	I1024 20:01:42.041842 1181050 command_runner.go:130] >            lameduck 5s
	I1024 20:01:42.041847 1181050 command_runner.go:130] >         }
	I1024 20:01:42.041852 1181050 command_runner.go:130] >         ready
	I1024 20:01:42.041868 1181050 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1024 20:01:42.041878 1181050 command_runner.go:130] >            pods insecure
	I1024 20:01:42.041885 1181050 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1024 20:01:42.041909 1181050 command_runner.go:130] >            ttl 30
	I1024 20:01:42.041918 1181050 command_runner.go:130] >         }
	I1024 20:01:42.041924 1181050 command_runner.go:130] >         prometheus :9153
	I1024 20:01:42.041933 1181050 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1024 20:01:42.041939 1181050 command_runner.go:130] >            max_concurrent 1000
	I1024 20:01:42.041944 1181050 command_runner.go:130] >         }
	I1024 20:01:42.041952 1181050 command_runner.go:130] >         cache 30
	I1024 20:01:42.041957 1181050 command_runner.go:130] >         loop
	I1024 20:01:42.041965 1181050 command_runner.go:130] >         reload
	I1024 20:01:42.041970 1181050 command_runner.go:130] >         loadbalance
	I1024 20:01:42.041980 1181050 command_runner.go:130] >     }
	I1024 20:01:42.041985 1181050 command_runner.go:130] > kind: ConfigMap
	I1024 20:01:42.041989 1181050 command_runner.go:130] > metadata:
	I1024 20:01:42.042000 1181050 command_runner.go:130] >   creationTimestamp: "2023-10-24T20:01:29Z"
	I1024 20:01:42.042005 1181050 command_runner.go:130] >   name: coredns
	I1024 20:01:42.042013 1181050 command_runner.go:130] >   namespace: kube-system
	I1024 20:01:42.042019 1181050 command_runner.go:130] >   resourceVersion: "229"
	I1024 20:01:42.042028 1181050 command_runner.go:130] >   uid: f6db6650-9743-49cc-af43-f3eb266b9b36
	I1024 20:01:42.045864 1181050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
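The pipeline above reads the coredns ConfigMap (the Corefile dumped just before it), uses sed to insert a hosts block mapping host.minikube.internal to 192.168.58.1 ahead of the forward plugin plus a log directive ahead of errors, then replaces the ConfigMap. A rough Go equivalent of the hosts insertion alone, assuming Corefile text like the one shown above (injectMinikubeHost is a hypothetical name, not minikube's implementation):

package main

import (
	"fmt"
	"strings"
)

// injectMinikubeHost inserts a hosts{} block before the "forward ." line of
// a Corefile, approximating the sed -e '/forward/i ...' step logged above.
func injectMinikubeHost(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n    }\n"
	fmt.Print(injectMinikubeHost(corefile, "192.168.58.1"))
}
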
	I1024 20:01:42.046333 1181050 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:01:42.046621 1181050 kapi.go:59] client config for multinode-773966: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 20:01:42.048947 1181050 node_ready.go:35] waiting up to 6m0s for node "multinode-773966" to be "Ready" ...
	I1024 20:01:42.049069 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:42.049080 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:42.049090 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:42.049103 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:42.094673 1181050 round_trippers.go:574] Response Status: 200 OK in 45 milliseconds
	I1024 20:01:42.094747 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:42.094770 1181050 round_trippers.go:580]     Audit-Id: 338ab6f8-a086-4857-a12e-a603b83c1b37
	I1024 20:01:42.094789 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:42.094819 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:42.094846 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:42.094868 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:42.094899 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:42 GMT
	I1024 20:01:42.095403 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:42.096384 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:42.096409 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:42.096459 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:42.096480 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:42.146337 1181050 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I1024 20:01:42.146408 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:42.146431 1181050 round_trippers.go:580]     Audit-Id: ecd58fdc-c276-422c-9fad-bea5b025726e
	I1024 20:01:42.146450 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:42.146488 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:42.146516 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:42.146542 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:42.146571 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:42 GMT
	I1024 20:01:42.149411 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:42.182223 1181050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 20:01:42.650976 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:42.651000 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:42.651047 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:42.651062 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:42.699214 1181050 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I1024 20:01:42.699240 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:42.699253 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:42.699280 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:42.699302 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:42.699315 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:42 GMT
	I1024 20:01:42.699322 1181050 round_trippers.go:580]     Audit-Id: 5bd449a0-730f-4fa6-8ebe-fcdbb9ad1a1d
	I1024 20:01:42.699333 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:42.700662 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:43.033992 1181050 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1024 20:01:43.042252 1181050 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1024 20:01:43.053843 1181050 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1024 20:01:43.065649 1181050 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1024 20:01:43.075718 1181050 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1024 20:01:43.088173 1181050 command_runner.go:130] > pod/storage-provisioner created
	I1024 20:01:43.094393 1181050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.098161459s)
	I1024 20:01:43.094480 1181050 command_runner.go:130] > configmap/coredns replaced
	I1024 20:01:43.094567 1181050 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.048675935s)
	I1024 20:01:43.094597 1181050 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1024 20:01:43.094654 1181050 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1024 20:01:43.094798 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1024 20:01:43.094823 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:43.094842 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:43.094860 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:43.105444 1181050 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1024 20:01:43.105512 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:43.105534 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:43.105554 1181050 round_trippers.go:580]     Content-Length: 1273
	I1024 20:01:43.105590 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:43 GMT
	I1024 20:01:43.105612 1181050 round_trippers.go:580]     Audit-Id: feb7ff9f-c0cc-4440-a117-f796edeef3fa
	I1024 20:01:43.105632 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:43.105653 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:43.105671 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:43.108729 1181050 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"369"},"items":[{"metadata":{"name":"standard","uid":"085d1da7-8e91-44c4-af3b-e095f8607431","resourceVersion":"362","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1024 20:01:43.109250 1181050 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"085d1da7-8e91-44c4-af3b-e095f8607431","resourceVersion":"362","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1024 20:01:43.109354 1181050 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1024 20:01:43.109412 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:43.109438 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:43.109459 1181050 round_trippers.go:473]     Content-Type: application/json
	I1024 20:01:43.109489 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:43.120431 1181050 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1024 20:01:43.120497 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:43.120518 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:43.120536 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:43.120556 1181050 round_trippers.go:580]     Content-Length: 1220
	I1024 20:01:43.120600 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:43 GMT
	I1024 20:01:43.120618 1181050 round_trippers.go:580]     Audit-Id: 632ad34c-f15e-4bdd-ae3c-933b1f5ff543
	I1024 20:01:43.120636 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:43.120654 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:43.120978 1181050 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"085d1da7-8e91-44c4-af3b-e095f8607431","resourceVersion":"362","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1024 20:01:43.123465 1181050 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1024 20:01:43.125672 1181050 addons.go:502] enable addons completed in 1.420852343s: enabled=[storage-provisioner default-storageclass]
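The storageclass.yaml applied above carries the storageclass.kubernetes.io/is-default-class: "true" annotation visible in the PUT body, which is what marks "standard" as the cluster default. A sketch that reads that flag back through client-go (clientset built as before; a verification helper, not part of minikube):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sc, err := cs.StorageV1().StorageClasses().
		Get(context.Background(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The annotation below is the one visible in the PUT body above.
	isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
	fmt.Printf("storageclass %q default=%v provisioner=%s\n",
		sc.Name, isDefault, sc.Provisioner)
}
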
	I1024 20:01:43.150715 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:43.150740 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:43.150751 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:43.150759 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:43.153380 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:43.153445 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:43.153468 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:43.153487 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:43 GMT
	I1024 20:01:43.153522 1181050 round_trippers.go:580]     Audit-Id: 26048a05-2be0-4fd6-ae38-1b88b3931f44
	I1024 20:01:43.153547 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:43.153567 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:43.153599 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:43.153879 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:43.650471 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:43.650497 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:43.650508 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:43.650515 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:43.653147 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:43.653221 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:43.653230 1181050 round_trippers.go:580]     Audit-Id: 678a6a54-2269-4acb-ba7a-45fc6315dfe8
	I1024 20:01:43.653238 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:43.653244 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:43.653251 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:43.653257 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:43.653288 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:43 GMT
	I1024 20:01:43.653402 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:44.151033 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:44.151056 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:44.151065 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:44.151073 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:44.154185 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:01:44.154314 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:44.154331 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:44.154339 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:44.154346 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:44.154362 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:44.154371 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:44 GMT
	I1024 20:01:44.154382 1181050 round_trippers.go:580]     Audit-Id: e1b54718-055e-467f-a999-048d37a05d7e
	I1024 20:01:44.154541 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:44.154967 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
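node_ready.go is polling GET /api/v1/nodes/multinode-773966 about every 500 ms until the kubelet reports the NodeReady condition True; it stays False above until the kindnet CNI comes up. A minimal version of that readiness check (clientset built as before; a sketch under those assumptions, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 720; i++ { // up to 6m0s, matching the wait above
		if ok, err := nodeReady(cs, "multinode-773966"); err == nil && ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for Ready")
}
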
	I1024 20:01:44.651075 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:44.651136 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:44.651152 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:44.651159 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:44.653688 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:44.653787 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:44.653803 1181050 round_trippers.go:580]     Audit-Id: 86fec595-017a-4e6e-b9fc-cf4ba4973d3c
	I1024 20:01:44.653810 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:44.653822 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:44.653836 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:44.653846 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:44.653852 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:44 GMT
	I1024 20:01:44.653940 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:45.150466 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:45.150491 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:45.150501 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:45.150509 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:45.153253 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:45.153321 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:45.153342 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:45.153362 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:45.153396 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:45.153420 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:45.153441 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:45 GMT
	I1024 20:01:45.153463 1181050 round_trippers.go:580]     Audit-Id: 075b6806-0d08-4d91-9a43-42fd103f7099
	I1024 20:01:45.153678 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:45.650156 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:45.650179 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:45.650189 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:45.650196 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:45.652571 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:45.652594 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:45.652602 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:45.652609 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:45.652616 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:45 GMT
	I1024 20:01:45.652622 1181050 round_trippers.go:580]     Audit-Id: d51af707-81b3-4889-a687-1d920791b0d6
	I1024 20:01:45.652629 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:45.652637 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:45.652967 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:46.151116 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:46.151139 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:46.151149 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:46.151157 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:46.153782 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:46.153844 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:46.153865 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:46 GMT
	I1024 20:01:46.153883 1181050 round_trippers.go:580]     Audit-Id: 0ccd8549-d736-4292-9221-211bbf8deece
	I1024 20:01:46.153904 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:46.153923 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:46.153949 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:46.153967 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:46.154146 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:46.650194 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:46.650221 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:46.650235 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:46.650247 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:46.652964 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:46.652999 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:46.653007 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:46.653019 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:46.653025 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:46.653043 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:46 GMT
	I1024 20:01:46.653059 1181050 round_trippers.go:580]     Audit-Id: e55b3452-3b33-4aca-971b-07a7b5a12ea2
	I1024 20:01:46.653068 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:46.653210 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:46.653651 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:01:47.151006 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:47.151033 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:47.151043 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:47.151051 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:47.153679 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:47.153704 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:47.153713 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:47 GMT
	I1024 20:01:47.153720 1181050 round_trippers.go:580]     Audit-Id: 8ac28e19-b8fb-4365-baf3-76dfb04a772a
	I1024 20:01:47.153766 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:47.153781 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:47.153788 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:47.153802 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:47.153991 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:47.651121 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:47.651143 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:47.651157 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:47.651165 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:47.653755 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:47.653809 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:47.653847 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:47.653876 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:47.653889 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:47 GMT
	I1024 20:01:47.653896 1181050 round_trippers.go:580]     Audit-Id: 6e75f4d9-5835-414c-a633-df2db8828407
	I1024 20:01:47.653902 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:47.653918 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:47.654035 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:48.150402 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:48.150426 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:48.150437 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:48.150445 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:48.153083 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:48.153104 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:48.153113 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:48.153120 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:48 GMT
	I1024 20:01:48.153126 1181050 round_trippers.go:580]     Audit-Id: 71d57573-d98d-4fec-98e2-40058a5025af
	I1024 20:01:48.153132 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:48.153138 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:48.153146 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:48.153287 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:48.650187 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:48.650209 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:48.650219 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:48.650226 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:48.653052 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:48.653076 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:48.653086 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:48.653093 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:48.653099 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:48 GMT
	I1024 20:01:48.653105 1181050 round_trippers.go:580]     Audit-Id: 9c25b232-7238-448d-b81c-d94d9ee69b83
	I1024 20:01:48.653111 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:48.653128 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:48.653376 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:48.653803 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:01:49.150502 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:49.150524 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:49.150534 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:49.150541 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:49.153073 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:49.153099 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:49.153108 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:49 GMT
	I1024 20:01:49.153115 1181050 round_trippers.go:580]     Audit-Id: 0bc7b871-329a-4d1b-9e12-28b3860f8f13
	I1024 20:01:49.153121 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:49.153127 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:49.153133 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:49.153140 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:49.153268 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:49.650145 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:49.650167 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:49.650178 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:49.650185 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:49.652743 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:49.652764 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:49.652772 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:49 GMT
	I1024 20:01:49.652779 1181050 round_trippers.go:580]     Audit-Id: 8e23ec5a-0bc2-401f-8a07-766dfbdf6b6f
	I1024 20:01:49.652785 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:49.652791 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:49.652798 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:49.652804 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:49.652918 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:50.150165 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:50.150192 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:50.150202 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:50.150209 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:50.152752 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:50.152779 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:50.152788 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:50 GMT
	I1024 20:01:50.152795 1181050 round_trippers.go:580]     Audit-Id: 7e88d9b9-586e-4617-9f6c-3fa91586a6be
	I1024 20:01:50.152801 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:50.152808 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:50.152814 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:50.152821 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:50.153186 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:50.650290 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:50.650312 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:50.650321 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:50.650329 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:50.652762 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:50.652782 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:50.652791 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:50.652797 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:50 GMT
	I1024 20:01:50.652804 1181050 round_trippers.go:580]     Audit-Id: 3b8df044-23fa-40f4-90f6-62beaf5eb331
	I1024 20:01:50.652810 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:50.652820 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:50.652827 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:50.653163 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:51.151100 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:51.151124 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:51.151134 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:51.151149 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:51.153721 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:51.153760 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:51.153769 1181050 round_trippers.go:580]     Audit-Id: d84eb1a6-d086-41bc-ab10-4711d09618a9
	I1024 20:01:51.153775 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:51.153782 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:51.153788 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:51.153795 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:51.153804 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:51 GMT
	I1024 20:01:51.154308 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:51.154714 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:01:51.650164 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:51.650195 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:51.650210 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:51.650238 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:51.652983 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:51.653012 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:51.653022 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:51.653029 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:51.653035 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:51 GMT
	I1024 20:01:51.653041 1181050 round_trippers.go:580]     Audit-Id: 194f4dd0-47c1-495d-ba6a-3c427067ae4a
	I1024 20:01:51.653047 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:51.653053 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:51.653173 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:52.150984 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:52.151007 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:52.151017 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:52.151023 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:52.153521 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:52.153548 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:52.153557 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:52.153564 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:52.153570 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:52.153576 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:52.153583 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:52 GMT
	I1024 20:01:52.153590 1181050 round_trippers.go:580]     Audit-Id: a469e646-2a51-4299-8801-dd204a36e7c3
	I1024 20:01:52.153718 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:52.650925 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:52.650954 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:52.650966 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:52.650976 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:52.653418 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:52.653441 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:52.653449 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:52 GMT
	I1024 20:01:52.653456 1181050 round_trippers.go:580]     Audit-Id: 36b718b4-9dbf-421b-9007-02e29b834de9
	I1024 20:01:52.653462 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:52.653468 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:52.653475 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:52.653486 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:52.653594 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:53.150800 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:53.150824 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:53.150834 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:53.150841 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:53.153788 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:53.153815 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:53.153827 1181050 round_trippers.go:580]     Audit-Id: cc99765d-6ee2-4dcb-a339-a65dcc249a13
	I1024 20:01:53.153840 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:53.153847 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:53.153853 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:53.153860 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:53.153866 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:53 GMT
	I1024 20:01:53.154376 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:53.154839 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:01:53.650522 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:53.650548 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:53.650559 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:53.650566 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:53.653279 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:53.653304 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:53.653313 1181050 round_trippers.go:580]     Audit-Id: 0788ac21-7dfd-4f4f-8266-da8e36823ad1
	I1024 20:01:53.653319 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:53.653325 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:53.653332 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:53.653338 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:53.653348 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:53 GMT
	I1024 20:01:53.653444 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:54.150098 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:54.150121 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:54.150131 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:54.150138 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:54.152751 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:54.152773 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:54.152782 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:54.152789 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:54 GMT
	I1024 20:01:54.152796 1181050 round_trippers.go:580]     Audit-Id: d10c8901-43ef-4684-a3d2-8ca22b9df713
	I1024 20:01:54.152802 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:54.152815 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:54.152821 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:54.152950 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:54.650399 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:54.650419 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:54.650430 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:54.650438 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:54.652912 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:54.652938 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:54.652947 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:54 GMT
	I1024 20:01:54.652954 1181050 round_trippers.go:580]     Audit-Id: 1df96831-a09f-4adf-955d-ae7886e6f487
	I1024 20:01:54.652961 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:54.652967 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:54.652973 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:54.652979 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:54.653140 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:55.150143 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:55.150186 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:55.150197 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:55.150204 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:55.152832 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:55.152853 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:55.152862 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:55 GMT
	I1024 20:01:55.152868 1181050 round_trippers.go:580]     Audit-Id: 63a88383-daef-4095-97d6-3d70adbd5264
	I1024 20:01:55.152874 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:55.152882 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:55.152888 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:55.152894 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:55.153035 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:55.650685 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:55.650711 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:55.650721 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:55.650729 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:55.653345 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:55.653366 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:55.653375 1181050 round_trippers.go:580]     Audit-Id: 289d52d3-0cc5-489e-b106-240af7585a86
	I1024 20:01:55.653386 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:55.653392 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:55.653398 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:55.653405 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:55.653411 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:55 GMT
	I1024 20:01:55.653509 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:55.653928 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:01:56.150679 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:56.150719 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:56.150730 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:56.150737 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:56.153403 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:56.153423 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:56.153431 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:56.153438 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:56.153444 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:56.153462 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:56 GMT
	I1024 20:01:56.153468 1181050 round_trippers.go:580]     Audit-Id: cc286e70-1650-4033-b4b6-21dcc075ded5
	I1024 20:01:56.153474 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:56.153587 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:56.650329 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:56.650352 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:56.650363 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:56.650370 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:56.652979 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:56.653002 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:56.653011 1181050 round_trippers.go:580]     Audit-Id: 027dadbb-27ec-469c-bdd7-3109203e620d
	I1024 20:01:56.653024 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:56.653032 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:56.653038 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:56.653045 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:56.653055 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:56 GMT
	I1024 20:01:56.653146 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:57.150176 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:57.150204 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:57.150217 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:57.150227 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:57.152843 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:57.152865 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:57.152874 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:57.152880 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:57.152886 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:57.152893 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:57.152899 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:57 GMT
	I1024 20:01:57.152905 1181050 round_trippers.go:580]     Audit-Id: a8a03480-d725-4a16-88d2-0bd86b9c8c80
	I1024 20:01:57.153021 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:57.650076 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:57.650099 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:57.650110 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:57.650117 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:57.652589 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:57.652612 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:57.652621 1181050 round_trippers.go:580]     Audit-Id: c3e7e12c-79f9-4929-b5aa-8335689712fa
	I1024 20:01:57.652627 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:57.652633 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:57.652639 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:57.652646 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:57.652652 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:57 GMT
	I1024 20:01:57.652772 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:58.150898 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:58.150922 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:58.150933 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:58.150940 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:58.153932 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:58.153951 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:58.153960 1181050 round_trippers.go:580]     Audit-Id: 4a806197-709b-4360-b96c-0aac4a609d9d
	I1024 20:01:58.153966 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:58.153972 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:58.153979 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:58.153985 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:58.153992 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:58 GMT
	I1024 20:01:58.154289 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:58.154679 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:01:58.651091 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:58.651114 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:58.651123 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:58.651131 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:58.653477 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:58.653499 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:58.653507 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:58.653519 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:58.653526 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:58.653532 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:58.653543 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:58 GMT
	I1024 20:01:58.653555 1181050 round_trippers.go:580]     Audit-Id: 31cd71a6-3996-4fbe-9520-979864de1e03
	I1024 20:01:58.653961 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:59.151112 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:59.151139 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:59.151149 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:59.151156 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:59.153854 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:59.153877 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:59.153885 1181050 round_trippers.go:580]     Audit-Id: 8cc03719-c9b0-4ac0-9d6a-e915809b261d
	I1024 20:01:59.153892 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:59.153898 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:59.153905 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:59.153914 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:59.153927 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:59 GMT
	I1024 20:01:59.154205 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:01:59.650171 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:01:59.650195 1181050 round_trippers.go:469] Request Headers:
	I1024 20:01:59.650205 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:01:59.650213 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:01:59.652565 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:01:59.652582 1181050 round_trippers.go:577] Response Headers:
	I1024 20:01:59.652590 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:01:59 GMT
	I1024 20:01:59.652596 1181050 round_trippers.go:580]     Audit-Id: 07aa7156-124b-455b-b7d6-155d3c2120ff
	I1024 20:01:59.652602 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:01:59.652608 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:01:59.652615 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:01:59.652621 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:01:59.652768 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:00.150224 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:00.150253 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:00.150263 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:00.150271 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:00.152961 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:00.152991 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:00.153002 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:00.153009 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:00 GMT
	I1024 20:02:00.153015 1181050 round_trippers.go:580]     Audit-Id: 85ec012b-e25a-4b91-bde6-47131ddef6fa
	I1024 20:02:00.153021 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:00.153027 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:00.153038 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:00.153376 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:00.650177 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:00.650202 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:00.650212 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:00.650220 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:00.652691 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:00.652716 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:00.652724 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:00.652732 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:00 GMT
	I1024 20:02:00.652738 1181050 round_trippers.go:580]     Audit-Id: 251e52ad-5d69-4da1-b5eb-0f9953b571a0
	I1024 20:02:00.652744 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:00.652750 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:00.652757 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:00.652937 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:00.653332 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:02:01.150337 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:01.150363 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:01.150373 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:01.150380 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:01.153120 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:01.153147 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:01.153156 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:01.153163 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:01.153170 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:01.153176 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:01.153183 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:01 GMT
	I1024 20:02:01.153193 1181050 round_trippers.go:580]     Audit-Id: 4147c66a-e0cb-4c0c-afa7-be75a72cf255
	I1024 20:02:01.153400 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:01.650130 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:01.650152 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:01.650163 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:01.650170 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:01.652702 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:01.652723 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:01.652731 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:01 GMT
	I1024 20:02:01.652738 1181050 round_trippers.go:580]     Audit-Id: dca2b181-f0a8-4b8c-bed5-91863a8f1a56
	I1024 20:02:01.652744 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:01.652750 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:01.652757 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:01.652766 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:01.653039 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:02.150843 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:02.150882 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:02.150894 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:02.150902 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:02.153571 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:02.153590 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:02.153598 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:02.153605 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:02.153612 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:02 GMT
	I1024 20:02:02.153619 1181050 round_trippers.go:580]     Audit-Id: b1f8eac6-3f71-4020-9998-545c5359ed68
	I1024 20:02:02.153625 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:02.153643 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:02.153783 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:02.650164 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:02.650190 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:02.650200 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:02.650208 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:02.652725 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:02.652751 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:02.652762 1181050 round_trippers.go:580]     Audit-Id: 510a9bf6-b111-470a-b3b0-aff6bc4be3b4
	I1024 20:02:02.652768 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:02.652775 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:02.652781 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:02.652787 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:02.652797 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:02 GMT
	I1024 20:02:02.652924 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:03.151085 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:03.151112 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:03.151122 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:03.151129 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:03.153913 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:03.153940 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:03.153949 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:03.153956 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:03.153962 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:03.153969 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:03 GMT
	I1024 20:02:03.153976 1181050 round_trippers.go:580]     Audit-Id: 3c96cf44-aa5d-4bdb-b8cc-3063069ac8f4
	I1024 20:02:03.153983 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:03.154123 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:03.154552 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:02:03.650781 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:03.650806 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:03.650816 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:03.650823 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:03.653358 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:03.653381 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:03.653390 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:03.653396 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:03.653403 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:03 GMT
	I1024 20:02:03.653412 1181050 round_trippers.go:580]     Audit-Id: 7d1277c3-1566-4f9d-9eaa-d8b5cc5f79b0
	I1024 20:02:03.653418 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:03.653424 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:03.653532 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:04.150706 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:04.150730 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:04.150740 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:04.150748 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:04.153326 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:04.153355 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:04.153364 1181050 round_trippers.go:580]     Audit-Id: 0dbfc080-e136-4377-89d6-83b9bef9b4a1
	I1024 20:02:04.153371 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:04.153377 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:04.153384 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:04.153391 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:04.153397 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:04 GMT
	I1024 20:02:04.153817 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:04.650954 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:04.650984 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:04.650994 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:04.651002 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:04.653398 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:04.653425 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:04.653433 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:04.653439 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:04 GMT
	I1024 20:02:04.653446 1181050 round_trippers.go:580]     Audit-Id: 056e3ee1-1ec4-41b7-a0de-b7d20f99bffd
	I1024 20:02:04.653452 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:04.653462 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:04.653468 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:04.653758 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:05.150154 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:05.150181 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:05.150192 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:05.150200 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:05.152837 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:05.152860 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:05.152869 1181050 round_trippers.go:580]     Audit-Id: c9e5e5da-40af-4928-93a7-43920a0a5461
	I1024 20:02:05.152876 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:05.152882 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:05.152888 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:05.152895 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:05.152901 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:05 GMT
	I1024 20:02:05.153416 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:05.650230 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:05.650255 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:05.650266 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:05.650276 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:05.652781 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:05.652800 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:05.652809 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:05.652815 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:05.652822 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:05.652829 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:05 GMT
	I1024 20:02:05.652837 1181050 round_trippers.go:580]     Audit-Id: bc9cfec9-9144-4d22-ae1e-45db4415bc77
	I1024 20:02:05.652843 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:05.652959 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:05.653351 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:02:06.150122 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:06.150147 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:06.150157 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:06.150165 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:06.152739 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:06.152800 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:06.152834 1181050 round_trippers.go:580]     Audit-Id: 13d1de40-58d6-4f9c-998c-a1ccf08b3103
	I1024 20:02:06.152854 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:06.152868 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:06.152874 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:06.152881 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:06.152887 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:06 GMT
	I1024 20:02:06.153027 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:06.650775 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:06.650806 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:06.650816 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:06.650823 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:06.653575 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:06.653606 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:06.653618 1181050 round_trippers.go:580]     Audit-Id: a4fa4e5d-889a-457c-919e-34b134253251
	I1024 20:02:06.653625 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:06.653631 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:06.653637 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:06.653650 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:06.653661 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:06 GMT
	I1024 20:02:06.653785 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:07.150999 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:07.151024 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:07.151034 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:07.151049 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:07.153847 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:07.153902 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:07.153935 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:07.154009 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:07 GMT
	I1024 20:02:07.154029 1181050 round_trippers.go:580]     Audit-Id: c8b230a6-fa6c-49f7-85fb-4c4d2e151c47
	I1024 20:02:07.154042 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:07.154051 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:07.154057 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:07.154182 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:07.650707 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:07.650731 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:07.650741 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:07.650748 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:07.653333 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:07.653358 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:07.653367 1181050 round_trippers.go:580]     Audit-Id: 2730154d-1c56-4c3a-b285-f51c3524f165
	I1024 20:02:07.653375 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:07.653381 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:07.653395 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:07.653404 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:07.653419 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:07 GMT
	I1024 20:02:07.653531 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:07.653939 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:02:08.151060 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:08.151083 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:08.151093 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:08.151101 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:08.154253 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:02:08.154312 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:08.154322 1181050 round_trippers.go:580]     Audit-Id: 5f719c17-718a-4485-b241-a6ed38845920
	I1024 20:02:08.154329 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:08.154335 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:08.154351 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:08.154358 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:08.154370 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:08 GMT
	I1024 20:02:08.154489 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:08.650698 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:08.650722 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:08.650733 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:08.650740 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:08.653446 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:08.653466 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:08.653474 1181050 round_trippers.go:580]     Audit-Id: 7ac0c743-969c-42b2-b975-9de9488fa223
	I1024 20:02:08.653485 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:08.653492 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:08.653498 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:08.653508 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:08.653517 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:08 GMT
	I1024 20:02:08.653613 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:09.150954 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:09.150978 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:09.150990 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:09.151014 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:09.153549 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:09.153573 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:09.153582 1181050 round_trippers.go:580]     Audit-Id: c0fff12d-7112-4562-a050-a05117f34f6d
	I1024 20:02:09.153588 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:09.153594 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:09.153607 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:09.153616 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:09.153623 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:09 GMT
	I1024 20:02:09.153768 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:09.650206 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:09.650231 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:09.650241 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:09.650249 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:09.652884 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:09.652925 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:09.652934 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:09.652940 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:09 GMT
	I1024 20:02:09.652949 1181050 round_trippers.go:580]     Audit-Id: 74b8f0a1-b68a-44bd-a099-2eb20364118b
	I1024 20:02:09.652956 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:09.652962 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:09.652969 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:09.653105 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:10.150196 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:10.150222 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:10.150233 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:10.150240 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:10.152842 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:10.152866 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:10.152878 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:10.152884 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:10.152890 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:10.152896 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:10 GMT
	I1024 20:02:10.152902 1181050 round_trippers.go:580]     Audit-Id: 2e18e12d-fc0e-415b-99e0-996fb6a1a9ee
	I1024 20:02:10.152913 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:10.153180 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:10.153590 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:02:10.650279 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:10.650303 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:10.650313 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:10.650320 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:10.652900 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:10.652919 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:10.652927 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:10 GMT
	I1024 20:02:10.652934 1181050 round_trippers.go:580]     Audit-Id: 45a2997c-4785-423b-a922-db996993da37
	I1024 20:02:10.652940 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:10.652946 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:10.652952 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:10.652958 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:10.653083 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:11.150370 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:11.150394 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:11.150405 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:11.150413 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:11.152991 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:11.153020 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:11.153030 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:11.153039 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:11.153045 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:11.153052 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:11.153063 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:11 GMT
	I1024 20:02:11.153072 1181050 round_trippers.go:580]     Audit-Id: 01716166-068e-4d3f-8bef-de61a8d1c573
	I1024 20:02:11.153413 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:11.650310 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:11.650334 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:11.650344 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:11.650352 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:11.652820 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:11.652839 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:11.652847 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:11.652855 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:11.652862 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:11.652868 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:11.652875 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:11 GMT
	I1024 20:02:11.652881 1181050 round_trippers.go:580]     Audit-Id: 5e72b12c-0402-4edd-aa9e-2108ccd28a5c
	I1024 20:02:11.653030 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:12.150366 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:12.150391 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:12.150401 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:12.150409 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:12.153037 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:12.153056 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:12.153065 1181050 round_trippers.go:580]     Audit-Id: ae5550c4-4011-40c7-9e59-6a4d80ebe02c
	I1024 20:02:12.153072 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:12.153078 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:12.153085 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:12.153091 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:12.153097 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:12 GMT
	I1024 20:02:12.153250 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:12.153660 1181050 node_ready.go:58] node "multinode-773966" has status "Ready":"False"
	I1024 20:02:12.650795 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:12.650825 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:12.650835 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:12.650843 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:12.653328 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:12.653352 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:12.653361 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:12.653368 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:12 GMT
	I1024 20:02:12.653375 1181050 round_trippers.go:580]     Audit-Id: 360ec61d-e9a6-4539-a690-56810fbecd34
	I1024 20:02:12.653381 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:12.653391 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:12.653397 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:12.653678 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:13.150941 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:13.150963 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:13.150972 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:13.150980 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:13.153443 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:13.153463 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:13.153472 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:13.153478 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:13.153484 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:13.153491 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:13.153497 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:13 GMT
	I1024 20:02:13.153504 1181050 round_trippers.go:580]     Audit-Id: 54fbc60a-bf7b-483b-ac05-e07ada3e4f44
	I1024 20:02:13.153767 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"300","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1024 20:02:13.650892 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:13.650916 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:13.650932 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:13.650943 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:13.653219 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:13.653238 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:13.653247 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:13.653253 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:13.653260 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:13.653266 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:13.653277 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:13 GMT
	I1024 20:02:13.653284 1181050 round_trippers.go:580]     Audit-Id: a1a51562-f0d6-4395-9bf7-a8def5b1c911
	I1024 20:02:13.653792 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:13.654183 1181050 node_ready.go:49] node "multinode-773966" has status "Ready":"True"
	I1024 20:02:13.654200 1181050 node_ready.go:38] duration metric: took 31.605229032s waiting for node "multinode-773966" to be "Ready" ...
	I1024 20:02:13.654210 1181050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:02:13.654279 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 20:02:13.654289 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:13.654297 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:13.654304 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:13.657518 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:02:13.657537 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:13.657545 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:13.657551 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:13 GMT
	I1024 20:02:13.657557 1181050 round_trippers.go:580]     Audit-Id: 1de30b29-a455-4c6d-9d5c-46d2a15cec7e
	I1024 20:02:13.657566 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:13.657575 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:13.657581 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:13.658894 1181050 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"397"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"395","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
	I1024 20:02:13.662773 1181050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xxljp" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:13.662866 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xxljp
	I1024 20:02:13.662877 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:13.662887 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:13.662893 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:13.665202 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:13.665221 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:13.665242 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:13.665252 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:13.665261 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:13.665267 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:13 GMT
	I1024 20:02:13.665274 1181050 round_trippers.go:580]     Audit-Id: e660757f-4a40-4a45-aadf-0e989247a22e
	I1024 20:02:13.665283 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:13.665865 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"395","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1024 20:02:13.666350 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:13.666367 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:13.666376 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:13.666383 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:13.668486 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:13.668505 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:13.668513 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:13.668520 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:13.668526 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:13.668532 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:13 GMT
	I1024 20:02:13.668541 1181050 round_trippers.go:580]     Audit-Id: 7ddcea37-ab30-46ee-aab3-1eeeb3b20381
	I1024 20:02:13.668550 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:13.668747 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:13.669160 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xxljp
	I1024 20:02:13.669177 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:13.669198 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:13.669210 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:13.674595 1181050 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 20:02:13.674615 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:13.674623 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:13.674630 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:13.674637 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:13.674644 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:13 GMT
	I1024 20:02:13.674653 1181050 round_trippers.go:580]     Audit-Id: 4dd3fc83-ab2b-458e-bf69-dffd4e8e4754
	I1024 20:02:13.674662 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:13.674847 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"395","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1024 20:02:13.675334 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:13.675351 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:13.675360 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:13.675368 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:13.677371 1181050 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 20:02:13.677390 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:13.677397 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:13 GMT
	I1024 20:02:13.677404 1181050 round_trippers.go:580]     Audit-Id: 1be51ea6-0728-4084-91b8-ad1595c2d76f
	I1024 20:02:13.677410 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:13.677416 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:13.677426 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:13.677435 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:13.677675 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:14.178348 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xxljp
	I1024 20:02:14.178375 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.178385 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.178392 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.182255 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:02:14.182281 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.182290 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.182296 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.182302 1181050 round_trippers.go:580]     Audit-Id: d4eb0dcd-0b5f-4cf6-886d-a1f40c7c1494
	I1024 20:02:14.182309 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.182315 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.182327 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.182449 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"395","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1024 20:02:14.182961 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:14.182977 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.182986 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.182993 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.188344 1181050 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1024 20:02:14.188365 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.188373 1181050 round_trippers.go:580]     Audit-Id: 5db7f569-13e6-4c19-9c2d-675829714fba
	I1024 20:02:14.188380 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.188386 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.188393 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.188401 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.188415 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.188960 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:14.678622 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xxljp
	I1024 20:02:14.678645 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.678655 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.678662 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.681177 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:14.681194 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.681203 1181050 round_trippers.go:580]     Audit-Id: c0823fe8-7b79-47a2-ac3e-3a33bfab7b4d
	I1024 20:02:14.681209 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.681216 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.681222 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.681229 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.681235 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.681592 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"407","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1024 20:02:14.682138 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:14.682155 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.682163 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.682170 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.684425 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:14.684443 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.684451 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.684458 1181050 round_trippers.go:580]     Audit-Id: 98bac82a-aaf9-425f-885f-7be5dc329cce
	I1024 20:02:14.684464 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.684474 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.684482 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.684489 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.684636 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:14.685015 1181050 pod_ready.go:92] pod "coredns-5dd5756b68-xxljp" in "kube-system" namespace has status "Ready":"True"
	I1024 20:02:14.685033 1181050 pod_ready.go:81] duration metric: took 1.0222271s waiting for pod "coredns-5dd5756b68-xxljp" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:14.685043 1181050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:14.685109 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773966
	I1024 20:02:14.685118 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.685125 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.685132 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.687386 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:14.687433 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.687449 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.687457 1181050 round_trippers.go:580]     Audit-Id: bf842a71-3e5a-4c07-84bd-0aa2f757a5a2
	I1024 20:02:14.687464 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.687470 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.687476 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.687485 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.687861 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773966","namespace":"kube-system","uid":"6d702ec5-2b3a-460f-83bd-afe267c6e11a","resourceVersion":"380","creationTimestamp":"2023-10-24T20:01:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"883e2738bfd207cffca852790a091db1","kubernetes.io/config.mirror":"883e2738bfd207cffca852790a091db1","kubernetes.io/config.seen":"2023-10-24T20:01:29.175728694Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1024 20:02:14.688300 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:14.688318 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.688327 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.688335 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.690399 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:14.690420 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.690428 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.690434 1181050 round_trippers.go:580]     Audit-Id: 6765ad7f-41f9-4981-b774-3d39bb171724
	I1024 20:02:14.690440 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.690447 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.690453 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.690464 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.690816 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:14.691204 1181050 pod_ready.go:92] pod "etcd-multinode-773966" in "kube-system" namespace has status "Ready":"True"
	I1024 20:02:14.691221 1181050 pod_ready.go:81] duration metric: took 6.171167ms waiting for pod "etcd-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:14.691235 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:14.691294 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773966
	I1024 20:02:14.691305 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.691312 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.691321 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.693433 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:14.693453 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.693461 1181050 round_trippers.go:580]     Audit-Id: d29456fa-63e9-46bd-b8de-a1a5244e8128
	I1024 20:02:14.693467 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.693473 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.693480 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.693487 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.693497 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.693754 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773966","namespace":"kube-system","uid":"b2bdeafa-2435-4a3a-ac17-6ce1c060ac88","resourceVersion":"381","creationTimestamp":"2023-10-24T20:01:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e97211f0bb5112c2116bdaec5410f7ba","kubernetes.io/config.mirror":"e97211f0bb5112c2116bdaec5410f7ba","kubernetes.io/config.seen":"2023-10-24T20:01:29.175734093Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1024 20:02:14.694250 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:14.694305 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.694322 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.694330 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.696392 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:14.696409 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.696418 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.696424 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.696430 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.696436 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.696445 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.696458 1181050 round_trippers.go:580]     Audit-Id: e672cea2-57bd-4840-b26d-7bd7978f7fd8
	I1024 20:02:14.696673 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:14.697071 1181050 pod_ready.go:92] pod "kube-apiserver-multinode-773966" in "kube-system" namespace has status "Ready":"True"
	I1024 20:02:14.697086 1181050 pod_ready.go:81] duration metric: took 5.841322ms waiting for pod "kube-apiserver-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:14.697097 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:14.697159 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773966
	I1024 20:02:14.697169 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.697177 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.697184 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.699292 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:14.699315 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.699323 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.699330 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.699336 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.699350 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.699361 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.699368 1181050 round_trippers.go:580]     Audit-Id: 46b9549e-5089-42ea-89f2-b27075a35fb6
	I1024 20:02:14.699541 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773966","namespace":"kube-system","uid":"36ab85e7-0c8e-4da4-940a-428d743184e0","resourceVersion":"310","creationTimestamp":"2023-10-24T20:01:29Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"95f71f86968dd4700c51541369b0c606","kubernetes.io/config.mirror":"95f71f86968dd4700c51541369b0c606","kubernetes.io/config.seen":"2023-10-24T20:01:29.175735496Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1024 20:02:14.851374 1181050 request.go:629] Waited for 151.280332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:14.851457 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:14.851468 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:14.851477 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:14.851484 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:14.853938 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:14.853961 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:14.853969 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:14 GMT
	I1024 20:02:14.853976 1181050 round_trippers.go:580]     Audit-Id: 3d76a689-4721-4944-b2e7-42c96504c539
	I1024 20:02:14.854012 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:14.854024 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:14.854030 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:14.854038 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:14.854166 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:14.854557 1181050 pod_ready.go:92] pod "kube-controller-manager-multinode-773966" in "kube-system" namespace has status "Ready":"True"
	I1024 20:02:14.854574 1181050 pod_ready.go:81] duration metric: took 157.465718ms waiting for pod "kube-controller-manager-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:14.854585 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsvnn" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:15.051853 1181050 request.go:629] Waited for 197.197482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsvnn
	I1024 20:02:15.051980 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsvnn
	I1024 20:02:15.051994 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:15.052004 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:15.052012 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:15.054759 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:15.054836 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:15.054859 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:15.054878 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:15 GMT
	I1024 20:02:15.054955 1181050 round_trippers.go:580]     Audit-Id: aa49f837-5f8e-44e1-aeba-f110ab4b226f
	I1024 20:02:15.054975 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:15.054982 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:15.054988 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:15.055121 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsvnn","generateName":"kube-proxy-","namespace":"kube-system","uid":"99e468ec-c444-4fbf-8a1c-97bd7c654075","resourceVersion":"374","creationTimestamp":"2023-10-24T20:01:41Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"438118bc-681e-453e-be1a-d33418e8630d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"438118bc-681e-453e-be1a-d33418e8630d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1024 20:02:15.250946 1181050 request.go:629] Waited for 195.283832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:15.251007 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:15.251014 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:15.251023 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:15.251030 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:15.253621 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:15.253676 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:15.253684 1181050 round_trippers.go:580]     Audit-Id: 7fb9af03-c3a7-4ba0-9bc9-80cd793a874a
	I1024 20:02:15.253695 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:15.253702 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:15.253713 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:15.253726 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:15.253752 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:15 GMT
	I1024 20:02:15.253866 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:15.254252 1181050 pod_ready.go:92] pod "kube-proxy-jsvnn" in "kube-system" namespace has status "Ready":"True"
	I1024 20:02:15.254269 1181050 pod_ready.go:81] duration metric: took 399.674081ms waiting for pod "kube-proxy-jsvnn" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:15.254281 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:15.451691 1181050 request.go:629] Waited for 197.335959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773966
	I1024 20:02:15.451752 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773966
	I1024 20:02:15.451776 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:15.451785 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:15.451792 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:15.454324 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:15.454346 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:15.454354 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:15.454361 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:15.454367 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:15 GMT
	I1024 20:02:15.454373 1181050 round_trippers.go:580]     Audit-Id: c9e44dae-a55a-4e8c-81ed-451accfc6934
	I1024 20:02:15.454383 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:15.454389 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:15.454633 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773966","namespace":"kube-system","uid":"0c4eebae-6ace-4cee-ba2c-72360a106163","resourceVersion":"379","creationTimestamp":"2023-10-24T20:01:29Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"daf0428413c67a76aa8986cb2e700828","kubernetes.io/config.mirror":"daf0428413c67a76aa8986cb2e700828","kubernetes.io/config.seen":"2023-10-24T20:01:29.175736800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1024 20:02:15.651427 1181050 request.go:629] Waited for 196.33474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:15.651517 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:02:15.651527 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:15.651536 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:15.651542 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:15.654101 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:15.654215 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:15.654250 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:15 GMT
	I1024 20:02:15.654259 1181050 round_trippers.go:580]     Audit-Id: bd78ad24-d536-4fdd-bca9-b299eca4201b
	I1024 20:02:15.654268 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:15.654274 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:15.654284 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:15.654294 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:15.654400 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:02:15.654796 1181050 pod_ready.go:92] pod "kube-scheduler-multinode-773966" in "kube-system" namespace has status "Ready":"True"
	I1024 20:02:15.654814 1181050 pod_ready.go:81] duration metric: took 400.516063ms waiting for pod "kube-scheduler-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:02:15.654827 1181050 pod_ready.go:38] duration metric: took 2.000602255s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:02:15.654846 1181050 api_server.go:52] waiting for apiserver process to appear ...
	I1024 20:02:15.654904 1181050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:02:15.667760 1181050 command_runner.go:130] > 1261
	I1024 20:02:15.667793 1181050 api_server.go:72] duration metric: took 33.696646409s to wait for apiserver process to appear ...
	I1024 20:02:15.667818 1181050 api_server.go:88] waiting for apiserver healthz status ...
	I1024 20:02:15.667833 1181050 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1024 20:02:15.677837 1181050 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1024 20:02:15.677903 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1024 20:02:15.677909 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:15.677918 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:15.677925 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:15.679226 1181050 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1024 20:02:15.679274 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:15.679296 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:15.679310 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:15.679328 1181050 round_trippers.go:580]     Content-Length: 264
	I1024 20:02:15.679335 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:15 GMT
	I1024 20:02:15.679342 1181050 round_trippers.go:580]     Audit-Id: faed2051-95ca-47c3-892b-d4442f2d1c9e
	I1024 20:02:15.679353 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:15.679359 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:15.679378 1181050 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1024 20:02:15.679472 1181050 api_server.go:141] control plane version: v1.28.3
	I1024 20:02:15.679502 1181050 api_server.go:131] duration metric: took 11.674965ms to wait for apiserver health ...
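The two probes above (/healthz, then /version) can be replayed by hand against the same endpoint. A minimal sketch, assuming the client certificate and key that minikube keeps under ~/.minikube are used for authentication (the profile paths shown are illustrative):

	# A healthy apiserver answers the health probe with a plain "ok".
	curl --cacert ~/.minikube/ca.crt \
	     --cert ~/.minikube/profiles/multinode-773966/client.crt \
	     --key ~/.minikube/profiles/multinode-773966/client.key \
	     https://192.168.58.2:8443/healthz
	# The version endpoint returns the same JSON document logged above.
	curl --cacert ~/.minikube/ca.crt \
	     --cert ~/.minikube/profiles/multinode-773966/client.crt \
	     --key ~/.minikube/profiles/multinode-773966/client.key \
	     https://192.168.58.2:8443/version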
	I1024 20:02:15.679514 1181050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 20:02:15.851863 1181050 request.go:629] Waited for 172.285162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 20:02:15.851943 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 20:02:15.851949 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:15.851957 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:15.851964 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:15.855239 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:02:15.855387 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:15.855424 1181050 round_trippers.go:580]     Audit-Id: 6b8d8596-c93b-4979-a6ac-851a37fd6fe6
	I1024 20:02:15.855447 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:15.855466 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:15.855485 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:15.855504 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:15.855553 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:15 GMT
	I1024 20:02:15.856014 1181050 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"407","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1024 20:02:15.858415 1181050 system_pods.go:59] 8 kube-system pods found
	I1024 20:02:15.858443 1181050 system_pods.go:61] "coredns-5dd5756b68-xxljp" [c3ba8ac1-f91f-4620-a22c-cd8946cd3a43] Running
	I1024 20:02:15.858450 1181050 system_pods.go:61] "etcd-multinode-773966" [6d702ec5-2b3a-460f-83bd-afe267c6e11a] Running
	I1024 20:02:15.858455 1181050 system_pods.go:61] "kindnet-drz9j" [8217bced-7146-429e-a09d-edf6e3891335] Running
	I1024 20:02:15.858461 1181050 system_pods.go:61] "kube-apiserver-multinode-773966" [b2bdeafa-2435-4a3a-ac17-6ce1c060ac88] Running
	I1024 20:02:15.858496 1181050 system_pods.go:61] "kube-controller-manager-multinode-773966" [36ab85e7-0c8e-4da4-940a-428d743184e0] Running
	I1024 20:02:15.858508 1181050 system_pods.go:61] "kube-proxy-jsvnn" [99e468ec-c444-4fbf-8a1c-97bd7c654075] Running
	I1024 20:02:15.858513 1181050 system_pods.go:61] "kube-scheduler-multinode-773966" [0c4eebae-6ace-4cee-ba2c-72360a106163] Running
	I1024 20:02:15.858518 1181050 system_pods.go:61] "storage-provisioner" [c67e72f3-94d0-4f1a-9a78-a7e5d344adae] Running
	I1024 20:02:15.858524 1181050 system_pods.go:74] duration metric: took 179.004684ms to wait for pod list to return data ...
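For comparison, a roughly equivalent check with kubectl (the context name is assumed to match the profile):

	# List the same eight kube-system pods the waiter fetched via the REST API.
	kubectl --context multinode-773966 get pods -n kube-system
	# Or block on readiness directly instead of polling the pod list.
	kubectl --context multinode-773966 wait --for=condition=Ready \
	        pod --all -n kube-system --timeout=6m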
	I1024 20:02:15.858537 1181050 default_sa.go:34] waiting for default service account to be created ...
	I1024 20:02:16.050931 1181050 request.go:629] Waited for 192.288603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1024 20:02:16.051028 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1024 20:02:16.051039 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:16.051048 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:16.051056 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:16.053651 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:16.053674 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:16.053686 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:16.053700 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:16.053711 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:16.053726 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:16.053751 1181050 round_trippers.go:580]     Content-Length: 261
	I1024 20:02:16.053763 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:16 GMT
	I1024 20:02:16.053772 1181050 round_trippers.go:580]     Audit-Id: ca11d909-9e33-4034-8ebf-fa7c3b71b1f3
	I1024 20:02:16.053806 1181050 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"411"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f3eaac50-5a03-4d88-ab5d-cb88270b6204","resourceVersion":"313","creationTimestamp":"2023-10-24T20:01:41Z"}}]}
	I1024 20:02:16.054054 1181050 default_sa.go:45] found service account: "default"
	I1024 20:02:16.054076 1181050 default_sa.go:55] duration metric: took 195.532396ms for default service account to be created ...
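The service-account wait reduces to a single lookup once the account exists; a sketch:

	kubectl --context multinode-773966 get serviceaccount default -n default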
	I1024 20:02:16.054089 1181050 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 20:02:16.251132 1181050 request.go:629] Waited for 196.967427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 20:02:16.251206 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 20:02:16.251217 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:16.251226 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:16.251236 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:16.254608 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:02:16.254674 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:16.254691 1181050 round_trippers.go:580]     Audit-Id: 74fb0191-0672-4ca4-8791-8d2f843d330d
	I1024 20:02:16.254698 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:16.254705 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:16.254711 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:16.254717 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:16.254728 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:16 GMT
	I1024 20:02:16.255503 1181050 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"407","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1024 20:02:16.257831 1181050 system_pods.go:86] 8 kube-system pods found
	I1024 20:02:16.257857 1181050 system_pods.go:89] "coredns-5dd5756b68-xxljp" [c3ba8ac1-f91f-4620-a22c-cd8946cd3a43] Running
	I1024 20:02:16.257866 1181050 system_pods.go:89] "etcd-multinode-773966" [6d702ec5-2b3a-460f-83bd-afe267c6e11a] Running
	I1024 20:02:16.257871 1181050 system_pods.go:89] "kindnet-drz9j" [8217bced-7146-429e-a09d-edf6e3891335] Running
	I1024 20:02:16.257877 1181050 system_pods.go:89] "kube-apiserver-multinode-773966" [b2bdeafa-2435-4a3a-ac17-6ce1c060ac88] Running
	I1024 20:02:16.257884 1181050 system_pods.go:89] "kube-controller-manager-multinode-773966" [36ab85e7-0c8e-4da4-940a-428d743184e0] Running
	I1024 20:02:16.257924 1181050 system_pods.go:89] "kube-proxy-jsvnn" [99e468ec-c444-4fbf-8a1c-97bd7c654075] Running
	I1024 20:02:16.257935 1181050 system_pods.go:89] "kube-scheduler-multinode-773966" [0c4eebae-6ace-4cee-ba2c-72360a106163] Running
	I1024 20:02:16.257940 1181050 system_pods.go:89] "storage-provisioner" [c67e72f3-94d0-4f1a-9a78-a7e5d344adae] Running
	I1024 20:02:16.257948 1181050 system_pods.go:126] duration metric: took 203.853439ms to wait for k8s-apps to be running ...
	I1024 20:02:16.257959 1181050 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:02:16.258026 1181050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:02:16.271389 1181050 system_svc.go:56] duration metric: took 13.417752ms WaitForService to wait for kubelet.
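The stray word "service" in the probe above is harmless: systemctl is-active accepts multiple unit names and exits 0 when at least one of them is active, so the nonexistent unit named "service" does not fail the check. Run standalone:

	# Exits 0 (and prints nothing with --quiet) as long as kubelet is active.
	sudo systemctl is-active --quiet service kubelet && echo kubelet active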
	I1024 20:02:16.271416 1181050 kubeadm.go:581] duration metric: took 34.300269984s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:02:16.271435 1181050 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:02:16.451836 1181050 request.go:629] Waited for 180.294616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1024 20:02:16.451897 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1024 20:02:16.451903 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:16.451912 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:16.451923 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:16.454395 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:16.454456 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:16.454478 1181050 round_trippers.go:580]     Audit-Id: 49416bb6-1db4-45f1-9064-39c6b9c65645
	I1024 20:02:16.454497 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:16.454531 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:16.454556 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:16.454575 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:16.454594 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:16 GMT
	I1024 20:02:16.454773 1181050 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1024 20:02:16.455225 1181050 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 20:02:16.455250 1181050 node_conditions.go:123] node cpu capacity is 2
	I1024 20:02:16.455262 1181050 node_conditions.go:105] duration metric: took 183.822658ms to run NodePressure ...
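Both capacity figures come from the node's .status.capacity; a sketch of reading them back, plus the pressure conditions this step is concerned with (expected output values taken from the log above):

	kubectl --context multinode-773966 get node multinode-773966 \
	        -o jsonpath='{.status.capacity.cpu}'                    # 2
	kubectl --context multinode-773966 get node multinode-773966 \
	        -o jsonpath="{.status.capacity['ephemeral-storage']}"   # 203034800Ki
	# MemoryPressure/DiskPressure/PIDPressure appear under Conditions:
	kubectl --context multinode-773966 describe node multinode-773966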
	I1024 20:02:16.455276 1181050 start.go:228] waiting for startup goroutines ...
	I1024 20:02:16.455292 1181050 start.go:233] waiting for cluster config update ...
	I1024 20:02:16.455303 1181050 start.go:242] writing updated cluster config ...
	I1024 20:02:16.457727 1181050 out.go:177] 
	I1024 20:02:16.460096 1181050 config.go:182] Loaded profile config "multinode-773966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:02:16.460184 1181050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/config.json ...
	I1024 20:02:16.462900 1181050 out.go:177] * Starting worker node multinode-773966-m02 in cluster multinode-773966
	I1024 20:02:16.464774 1181050 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 20:02:16.466953 1181050 out.go:177] * Pulling base image ...
	I1024 20:02:16.468913 1181050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:02:16.468936 1181050 cache.go:57] Caching tarball of preloaded images
	I1024 20:02:16.468994 1181050 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 20:02:16.469032 1181050 preload.go:174] Found /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1024 20:02:16.469048 1181050 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1024 20:02:16.469181 1181050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/config.json ...
	I1024 20:02:16.487179 1181050 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon, skipping pull
	I1024 20:02:16.487204 1181050 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in daemon, skipping load
	I1024 20:02:16.487224 1181050 cache.go:195] Successfully downloaded all kic artifacts
	I1024 20:02:16.487253 1181050 start.go:365] acquiring machines lock for multinode-773966-m02: {Name:mkd5bd63c77b9bd01c8934d6b5abe31ee06de07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:02:16.487363 1181050 start.go:369] acquired machines lock for "multinode-773966-m02" in 93.62µs
	I1024 20:02:16.487394 1181050 start.go:93] Provisioning new machine with config: &{Name:multinode-773966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-773966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 20:02:16.487481 1181050 start.go:125] createHost starting for "m02" (driver="docker")
	I1024 20:02:16.491601 1181050 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1024 20:02:16.491718 1181050 start.go:159] libmachine.API.Create for "multinode-773966" (driver="docker")
	I1024 20:02:16.491743 1181050 client.go:168] LocalClient.Create starting
	I1024 20:02:16.491805 1181050 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem
	I1024 20:02:16.491842 1181050 main.go:141] libmachine: Decoding PEM data...
	I1024 20:02:16.491861 1181050 main.go:141] libmachine: Parsing certificate...
	I1024 20:02:16.491919 1181050 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem
	I1024 20:02:16.491944 1181050 main.go:141] libmachine: Decoding PEM data...
	I1024 20:02:16.491958 1181050 main.go:141] libmachine: Parsing certificate...
	I1024 20:02:16.492207 1181050 cli_runner.go:164] Run: docker network inspect multinode-773966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 20:02:16.510510 1181050 network_create.go:77] Found existing network {name:multinode-773966 subnet:0x40027f7530 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1024 20:02:16.510559 1181050 kic.go:118] calculated static IP "192.168.58.3" for the "multinode-773966-m02" container
	I1024 20:02:16.510630 1181050 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 20:02:16.534828 1181050 cli_runner.go:164] Run: docker volume create multinode-773966-m02 --label name.minikube.sigs.k8s.io=multinode-773966-m02 --label created_by.minikube.sigs.k8s.io=true
	I1024 20:02:16.558493 1181050 oci.go:103] Successfully created a docker volume multinode-773966-m02
	I1024 20:02:16.558576 1181050 cli_runner.go:164] Run: docker run --rm --name multinode-773966-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-773966-m02 --entrypoint /usr/bin/test -v multinode-773966-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -d /var/lib
	I1024 20:02:17.134109 1181050 oci.go:107] Successfully prepared a docker volume multinode-773966-m02
	I1024 20:02:17.134147 1181050 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 20:02:17.134168 1181050 kic.go:191] Starting extracting preloaded images to volume ...
	I1024 20:02:17.134251 1181050 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-773966-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir
	I1024 20:02:21.482570 1181050 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-773966-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 -I lz4 -xf /preloaded.tar -C /extractDir: (4.348271776s)
	I1024 20:02:21.482605 1181050 kic.go:200] duration metric: took 4.348433 seconds to extract preloaded images to volume
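The two docker run invocations above are the usual pattern for seeding a named volume: mount it into a short-lived container whose only job is to create or fill the mount point. A generic sketch with hypothetical names (any image that ships tar and lz4, as the kicbase image does, will work):

	docker volume create demo-preload
	docker run --rm \
	       -v /path/to/images.tar.lz4:/preloaded.tar:ro \
	       -v demo-preload:/extractDir \
	       --entrypoint /usr/bin/tar \
	       some/base-image -I lz4 -xf /preloaded.tar -C /extractDir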
	W1024 20:02:21.482741 1181050 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 20:02:21.482852 1181050 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 20:02:21.562871 1181050 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-773966-m02 --name multinode-773966-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-773966-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-773966-m02 --network multinode-773966 --ip 192.168.58.3 --volume multinode-773966-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 20:02:21.947660 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966-m02 --format={{.State.Running}}
	I1024 20:02:21.976581 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966-m02 --format={{.State.Status}}
	I1024 20:02:22.007865 1181050 cli_runner.go:164] Run: docker exec multinode-773966-m02 stat /var/lib/dpkg/alternatives/iptables
	I1024 20:02:22.063441 1181050 oci.go:144] the created container "multinode-773966-m02" has a running status.
	I1024 20:02:22.063467 1181050 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa...
	I1024 20:02:22.245899 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1024 20:02:22.245987 1181050 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 20:02:22.275188 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966-m02 --format={{.State.Status}}
	I1024 20:02:22.309923 1181050 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 20:02:22.309942 1181050 kic_runner.go:114] Args: [docker exec --privileged multinode-773966-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 20:02:22.395482 1181050 cli_runner.go:164] Run: docker container inspect multinode-773966-m02 --format={{.State.Status}}
	I1024 20:02:22.417045 1181050 machine.go:88] provisioning docker machine ...
	I1024 20:02:22.417074 1181050 ubuntu.go:169] provisioning hostname "multinode-773966-m02"
	I1024 20:02:22.417134 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:02:22.445715 1181050 main.go:141] libmachine: Using SSH client type: native
	I1024 20:02:22.446159 1181050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34290 <nil> <nil>}
	I1024 20:02:22.446179 1181050 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-773966-m02 && echo "multinode-773966-m02" | sudo tee /etc/hostname
	I1024 20:02:22.446808 1181050 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1024 20:02:25.605782 1181050 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-773966-m02
	
	I1024 20:02:25.605875 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:02:25.625779 1181050 main.go:141] libmachine: Using SSH client type: native
	I1024 20:02:25.626198 1181050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34290 <nil> <nil>}
	I1024 20:02:25.626220 1181050 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-773966-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-773966-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-773966-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:02:25.762960 1181050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:02:25.762986 1181050 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 20:02:25.763002 1181050 ubuntu.go:177] setting up certificates
	I1024 20:02:25.763014 1181050 provision.go:83] configureAuth start
	I1024 20:02:25.763074 1181050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-773966-m02
	I1024 20:02:25.780908 1181050 provision.go:138] copyHostCerts
	I1024 20:02:25.780947 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 20:02:25.780986 1181050 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 20:02:25.780993 1181050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 20:02:25.781070 1181050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 20:02:25.781165 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 20:02:25.781182 1181050 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 20:02:25.781186 1181050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 20:02:25.781211 1181050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 20:02:25.781250 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 20:02:25.781264 1181050 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 20:02:25.781268 1181050 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 20:02:25.781289 1181050 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 20:02:25.781329 1181050 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.multinode-773966-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-773966-m02]
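minikube generates this server certificate in Go; an illustrative openssl equivalent producing the same organization and SANs (the file names and CN here are hypothetical) would be:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	        -subj "/O=jenkins.multinode-773966-m02/CN=minikube"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	        -days 365 -out server.pem \
	        -extfile <(printf 'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-773966-m02')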
	I1024 20:02:26.511011 1181050 provision.go:172] copyRemoteCerts
	I1024 20:02:26.511078 1181050 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:02:26.511128 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:02:26.535231 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34290 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa Username:docker}
	I1024 20:02:26.637160 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1024 20:02:26.637221 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 20:02:26.666766 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1024 20:02:26.666833 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1024 20:02:26.697583 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1024 20:02:26.697689 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:02:26.728188 1181050 provision.go:86] duration metric: configureAuth took 965.159343ms
	I1024 20:02:26.728218 1181050 ubuntu.go:193] setting minikube options for container-runtime
	I1024 20:02:26.728412 1181050 config.go:182] Loaded profile config "multinode-773966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:02:26.728515 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:02:26.747069 1181050 main.go:141] libmachine: Using SSH client type: native
	I1024 20:02:26.747491 1181050 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34290 <nil> <nil>}
	I1024 20:02:26.747513 1181050 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:02:27.003868 1181050 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
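The %!s(MISSING) above is Go's fmt placeholder for a format verb whose argument was dropped when the command was logged; it is not part of what ran on the node. Judging from the echoed output, the executed command was, in effect:

	sudo mkdir -p /etc/sysconfig && printf "%s" "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio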
	I1024 20:02:27.003901 1181050 machine.go:91] provisioned docker machine in 4.586837474s
	I1024 20:02:27.003920 1181050 client.go:171] LocalClient.Create took 10.51216309s
	I1024 20:02:27.003932 1181050 start.go:167] duration metric: libmachine.API.Create for "multinode-773966" took 10.512214265s
	I1024 20:02:27.003940 1181050 start.go:300] post-start starting for "multinode-773966-m02" (driver="docker")
	I1024 20:02:27.003949 1181050 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:02:27.004021 1181050 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:02:27.004062 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:02:27.029168 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34290 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa Username:docker}
	I1024 20:02:27.133807 1181050 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:02:27.138036 1181050 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1024 20:02:27.138061 1181050 command_runner.go:130] > NAME="Ubuntu"
	I1024 20:02:27.138068 1181050 command_runner.go:130] > VERSION_ID="22.04"
	I1024 20:02:27.138079 1181050 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1024 20:02:27.138090 1181050 command_runner.go:130] > VERSION_CODENAME=jammy
	I1024 20:02:27.138095 1181050 command_runner.go:130] > ID=ubuntu
	I1024 20:02:27.138105 1181050 command_runner.go:130] > ID_LIKE=debian
	I1024 20:02:27.138111 1181050 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1024 20:02:27.138117 1181050 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1024 20:02:27.138125 1181050 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1024 20:02:27.138133 1181050 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1024 20:02:27.138150 1181050 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1024 20:02:27.138204 1181050 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 20:02:27.138235 1181050 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 20:02:27.138250 1181050 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 20:02:27.138264 1181050 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1024 20:02:27.138274 1181050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 20:02:27.138353 1181050 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 20:02:27.138449 1181050 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 20:02:27.138456 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> /etc/ssl/certs/11176342.pem
	I1024 20:02:27.138589 1181050 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:02:27.149807 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 20:02:27.177920 1181050 start.go:303] post-start completed in 173.964661ms
	I1024 20:02:27.178347 1181050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-773966-m02
	I1024 20:02:27.198793 1181050 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/config.json ...
	I1024 20:02:27.199086 1181050 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 20:02:27.199151 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:02:27.217589 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34290 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa Username:docker}
	I1024 20:02:27.311722 1181050 command_runner.go:130] > 11%!
	(MISSING)I1024 20:02:27.311794 1181050 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 20:02:27.317216 1181050 command_runner.go:130] > 173G
	I1024 20:02:27.317583 1181050 start.go:128] duration metric: createHost completed in 10.83008838s
	I1024 20:02:27.317603 1181050 start.go:83] releasing machines lock for "multinode-773966-m02", held for 10.830225964s
	I1024 20:02:27.317673 1181050 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-773966-m02
	I1024 20:02:27.342306 1181050 out.go:177] * Found network options:
	I1024 20:02:27.344160 1181050 out.go:177]   - NO_PROXY=192.168.58.2
	W1024 20:02:27.346070 1181050 proxy.go:119] fail to check proxy env: Error ip not in block
	W1024 20:02:27.346108 1181050 proxy.go:119] fail to check proxy env: Error ip not in block
	I1024 20:02:27.346178 1181050 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:02:27.346222 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:02:27.346471 1181050 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:02:27.346525 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:02:27.369881 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34290 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa Username:docker}
	I1024 20:02:27.377865 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34290 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa Username:docker}
	I1024 20:02:27.615991 1181050 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1024 20:02:27.616008 1181050 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 20:02:27.621224 1181050 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1024 20:02:27.621251 1181050 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1024 20:02:27.621260 1181050 command_runner.go:130] > Device: b3h/179d	Inode: 1569408     Links: 1
	I1024 20:02:27.621267 1181050 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 20:02:27.621274 1181050 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1024 20:02:27.621281 1181050 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1024 20:02:27.621287 1181050 command_runner.go:130] > Change: 2023-10-24 19:23:59.409159037 +0000
	I1024 20:02:27.621294 1181050 command_runner.go:130] >  Birth: 2023-10-24 19:23:59.409159037 +0000
	I1024 20:02:27.621548 1181050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:02:27.645566 1181050 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 20:02:27.645645 1181050 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:02:27.685560 1181050 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1024 20:02:27.685616 1181050 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
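A quick way to confirm the effect of the two find/mv passes (sketch; the exact file set will vary):

	ls /etc/cni/net.d
	# e.g. 100-crio-bridge.conf.mk_disabled  200-loopback.conf.mk_disabled
	#      87-podman-bridge.conflist.mk_disabled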
	I1024 20:02:27.685627 1181050 start.go:472] detecting cgroup driver to use...
	I1024 20:02:27.685661 1181050 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 20:02:27.685718 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:02:27.703851 1181050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:02:27.718069 1181050 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:02:27.718137 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:02:27.734073 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:02:27.750743 1181050 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1024 20:02:27.853559 1181050 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:02:27.952560 1181050 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1024 20:02:27.952592 1181050 docker.go:214] disabling docker service ...
	I1024 20:02:27.952652 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:02:27.975131 1181050 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:02:27.988521 1181050 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:02:28.081858 1181050 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1024 20:02:28.081973 1181050 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:02:28.178208 1181050 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1024 20:02:28.178300 1181050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:02:28.192426 1181050 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:02:28.210220 1181050 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
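With /etc/crictl.yaml in place, crictl reaches CRI-O without needing a --runtime-endpoint flag on every call; for example:

	sudo crictl info       # runtime status via unix:///var/run/crio/crio.sock
	sudo crictl ps -a      # all containers known to CRI-O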
	I1024 20:02:28.211351 1181050 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1024 20:02:28.211447 1181050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:02:28.223929 1181050 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1024 20:02:28.224066 1181050 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:02:28.235457 1181050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:02:28.247024 1181050 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
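After the sed edits above, the relevant keys in /etc/crio/crio.conf.d/02-crio.conf read, in effect (surrounding sections elided):

	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	pause_image = "registry.k8s.io/pause:3.9"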
	I1024 20:02:28.259704 1181050 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 20:02:28.270464 1181050 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 20:02:28.279539 1181050 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1024 20:02:28.280530 1181050 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 20:02:28.290774 1181050 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 20:02:28.393623 1181050 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1024 20:02:28.532685 1181050 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1024 20:02:28.532756 1181050 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1024 20:02:28.537072 1181050 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1024 20:02:28.537093 1181050 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1024 20:02:28.537102 1181050 command_runner.go:130] > Device: bdh/189d	Inode: 190         Links: 1
	I1024 20:02:28.537110 1181050 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 20:02:28.537116 1181050 command_runner.go:130] > Access: 2023-10-24 20:02:28.506951158 +0000
	I1024 20:02:28.537123 1181050 command_runner.go:130] > Modify: 2023-10-24 20:02:28.506951158 +0000
	I1024 20:02:28.537130 1181050 command_runner.go:130] > Change: 2023-10-24 20:02:28.506951158 +0000
	I1024 20:02:28.537137 1181050 command_runner.go:130] >  Birth: -
	I1024 20:02:28.537719 1181050 start.go:540] Will wait 60s for crictl version
	I1024 20:02:28.537807 1181050 ssh_runner.go:195] Run: which crictl
	I1024 20:02:28.541823 1181050 command_runner.go:130] > /usr/bin/crictl
	I1024 20:02:28.542449 1181050 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 20:02:28.589872 1181050 command_runner.go:130] > Version:  0.1.0
	I1024 20:02:28.589897 1181050 command_runner.go:130] > RuntimeName:  cri-o
	I1024 20:02:28.589904 1181050 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1024 20:02:28.589914 1181050 command_runner.go:130] > RuntimeApiVersion:  v1
	I1024 20:02:28.589925 1181050 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1024 20:02:28.589996 1181050 ssh_runner.go:195] Run: crio --version
	I1024 20:02:28.633229 1181050 command_runner.go:130] > crio version 1.24.6
	I1024 20:02:28.633251 1181050 command_runner.go:130] > Version:          1.24.6
	I1024 20:02:28.633261 1181050 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1024 20:02:28.633274 1181050 command_runner.go:130] > GitTreeState:     clean
	I1024 20:02:28.633287 1181050 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1024 20:02:28.633296 1181050 command_runner.go:130] > GoVersion:        go1.18.2
	I1024 20:02:28.633301 1181050 command_runner.go:130] > Compiler:         gc
	I1024 20:02:28.633310 1181050 command_runner.go:130] > Platform:         linux/arm64
	I1024 20:02:28.633320 1181050 command_runner.go:130] > Linkmode:         dynamic
	I1024 20:02:28.633332 1181050 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 20:02:28.633338 1181050 command_runner.go:130] > SeccompEnabled:   true
	I1024 20:02:28.633343 1181050 command_runner.go:130] > AppArmorEnabled:  false
	I1024 20:02:28.635421 1181050 ssh_runner.go:195] Run: crio --version
	I1024 20:02:28.682553 1181050 command_runner.go:130] > crio version 1.24.6
	I1024 20:02:28.682573 1181050 command_runner.go:130] > Version:          1.24.6
	I1024 20:02:28.682582 1181050 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1024 20:02:28.682588 1181050 command_runner.go:130] > GitTreeState:     clean
	I1024 20:02:28.682596 1181050 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1024 20:02:28.682601 1181050 command_runner.go:130] > GoVersion:        go1.18.2
	I1024 20:02:28.682607 1181050 command_runner.go:130] > Compiler:         gc
	I1024 20:02:28.682613 1181050 command_runner.go:130] > Platform:         linux/arm64
	I1024 20:02:28.682619 1181050 command_runner.go:130] > Linkmode:         dynamic
	I1024 20:02:28.682648 1181050 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1024 20:02:28.682656 1181050 command_runner.go:130] > SeccompEnabled:   true
	I1024 20:02:28.682662 1181050 command_runner.go:130] > AppArmorEnabled:  false
	I1024 20:02:28.686885 1181050 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1024 20:02:28.689150 1181050 out.go:177]   - env NO_PROXY=192.168.58.2
	I1024 20:02:28.691003 1181050 cli_runner.go:164] Run: docker network inspect multinode-773966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 20:02:28.710853 1181050 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1024 20:02:28.715358 1181050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
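
The rewrite above is the usual privileged-edit idiom: the brace group emits /etc/hosts minus any stale host.minikube.internal line, appends the refreshed entry, and writes the result to a PID-named temp file, which "sudo cp" then installs. Redirecting straight into /etc/hosts would fail, since the redirection runs in the unprivileged shell rather than under sudo; and inside a Docker container /etc/hosts is a bind mount, which can be overwritten in place but not replaced by rename. The pattern in isolation:

	# refresh a single /etc/hosts entry without editing the live file directly
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
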
	I1024 20:02:28.728514 1181050 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966 for IP: 192.168.58.3
	I1024 20:02:28.728543 1181050 certs.go:190] acquiring lock for shared ca certs: {Name:mka7b9c27527bac3ad97e94531dcdc2bc2059d68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 20:02:28.728670 1181050 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key
	I1024 20:02:28.728712 1181050 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key
	I1024 20:02:28.728723 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1024 20:02:28.728735 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1024 20:02:28.728748 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1024 20:02:28.728758 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1024 20:02:28.728807 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem (1338 bytes)
	W1024 20:02:28.728837 1181050 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634_empty.pem, impossibly tiny 0 bytes
	I1024 20:02:28.728846 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem (1675 bytes)
	I1024 20:02:28.728874 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem (1082 bytes)
	I1024 20:02:28.728899 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem (1123 bytes)
	I1024 20:02:28.728921 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem (1675 bytes)
	I1024 20:02:28.728966 1181050 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 20:02:28.728994 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:02:28.729005 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem -> /usr/share/ca-certificates/1117634.pem
	I1024 20:02:28.729015 1181050 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> /usr/share/ca-certificates/11176342.pem
	I1024 20:02:28.729342 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 20:02:28.756950 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1024 20:02:28.784311 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 20:02:28.813466 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 20:02:28.841580 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 20:02:28.869841 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/1117634.pem --> /usr/share/ca-certificates/1117634.pem (1338 bytes)
	I1024 20:02:28.898040 1181050 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /usr/share/ca-certificates/11176342.pem (1708 bytes)
	I1024 20:02:28.926718 1181050 ssh_runner.go:195] Run: openssl version
	I1024 20:02:28.933184 1181050 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1024 20:02:28.933553 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 20:02:28.945171 1181050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:02:28.949482 1181050 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:02:28.949777 1181050 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:02:28.949837 1181050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 20:02:28.957925 1181050 command_runner.go:130] > b5213941
	I1024 20:02:28.958309 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 20:02:28.969656 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1117634.pem && ln -fs /usr/share/ca-certificates/1117634.pem /etc/ssl/certs/1117634.pem"
	I1024 20:02:28.981510 1181050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1117634.pem
	I1024 20:02:28.985882 1181050 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 24 19:36 /usr/share/ca-certificates/1117634.pem
	I1024 20:02:28.986182 1181050 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:36 /usr/share/ca-certificates/1117634.pem
	I1024 20:02:28.986263 1181050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1117634.pem
	I1024 20:02:28.994262 1181050 command_runner.go:130] > 51391683
	I1024 20:02:28.994696 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1117634.pem /etc/ssl/certs/51391683.0"
	I1024 20:02:29.006406 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11176342.pem && ln -fs /usr/share/ca-certificates/11176342.pem /etc/ssl/certs/11176342.pem"
	I1024 20:02:29.018075 1181050 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11176342.pem
	I1024 20:02:29.022710 1181050 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 24 19:36 /usr/share/ca-certificates/11176342.pem
	I1024 20:02:29.023012 1181050 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:36 /usr/share/ca-certificates/11176342.pem
	I1024 20:02:29.023114 1181050 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11176342.pem
	I1024 20:02:29.031176 1181050 command_runner.go:130] > 3ec20f2e
	I1024 20:02:29.031564 1181050 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11176342.pem /etc/ssl/certs/3ec20f2e.0"
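
The value printed by "openssl x509 -hash -noout" is the certificate's subject hash; OpenSSL-linked clients look CAs up in /etc/ssl/certs under the filename form <hash>.0, which is exactly what the three ln -fs calls above publish. For a single certificate (linking straight to the source file here for brevity, where the log goes through an intermediate /etc/ssl/certs symlink):

	# expose a CA under its OpenSSL subject hash so TLS clients can find it
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
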
	I1024 20:02:29.043111 1181050 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 20:02:29.047496 1181050 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1024 20:02:29.047541 1181050 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
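
A missing certs/etcd directory is the first-start heuristic here: ls exiting with status 2 means no etcd certificates have ever been generated for this node. The probe reduces to:

	# absent etcd cert dir => likely a first start, so fresh certs are needed
	if ! sudo ls /var/lib/minikube/certs/etcd >/dev/null 2>&1; then
	    echo "no etcd certs yet; likely first start"
	fi
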
	I1024 20:02:29.047650 1181050 ssh_runner.go:195] Run: crio config
	I1024 20:02:29.099380 1181050 command_runner.go:130] ! time="2023-10-24 20:02:29.099037277Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1024 20:02:29.099631 1181050 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1024 20:02:29.105126 1181050 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1024 20:02:29.105156 1181050 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1024 20:02:29.105165 1181050 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1024 20:02:29.105169 1181050 command_runner.go:130] > #
	I1024 20:02:29.105177 1181050 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1024 20:02:29.105186 1181050 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1024 20:02:29.105203 1181050 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1024 20:02:29.105215 1181050 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1024 20:02:29.105220 1181050 command_runner.go:130] > # reload'.
	I1024 20:02:29.105232 1181050 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1024 20:02:29.105240 1181050 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1024 20:02:29.105250 1181050 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1024 20:02:29.105258 1181050 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1024 20:02:29.105271 1181050 command_runner.go:130] > [crio]
	I1024 20:02:29.105279 1181050 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1024 20:02:29.105288 1181050 command_runner.go:130] > # container images, in this directory.
	I1024 20:02:29.105301 1181050 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1024 20:02:29.105312 1181050 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1024 20:02:29.105319 1181050 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1024 20:02:29.105329 1181050 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1024 20:02:29.105337 1181050 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1024 20:02:29.105352 1181050 command_runner.go:130] > # storage_driver = "vfs"
	I1024 20:02:29.105360 1181050 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1024 20:02:29.105370 1181050 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1024 20:02:29.105375 1181050 command_runner.go:130] > # storage_option = [
	I1024 20:02:29.105379 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.105389 1181050 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1024 20:02:29.105400 1181050 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1024 20:02:29.105407 1181050 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1024 20:02:29.105423 1181050 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1024 20:02:29.105434 1181050 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1024 20:02:29.105440 1181050 command_runner.go:130] > # always happen on a node reboot
	I1024 20:02:29.105449 1181050 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1024 20:02:29.105456 1181050 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1024 20:02:29.105463 1181050 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1024 20:02:29.105481 1181050 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1024 20:02:29.105499 1181050 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1024 20:02:29.105510 1181050 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1024 20:02:29.105522 1181050 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1024 20:02:29.105531 1181050 command_runner.go:130] > # internal_wipe = true
	I1024 20:02:29.105544 1181050 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1024 20:02:29.105552 1181050 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1024 20:02:29.105561 1181050 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1024 20:02:29.105574 1181050 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1024 20:02:29.105585 1181050 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1024 20:02:29.105591 1181050 command_runner.go:130] > [crio.api]
	I1024 20:02:29.105600 1181050 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1024 20:02:29.105606 1181050 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1024 20:02:29.105615 1181050 command_runner.go:130] > # IP address on which the stream server will listen.
	I1024 20:02:29.105620 1181050 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1024 20:02:29.105630 1181050 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1024 20:02:29.105637 1181050 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1024 20:02:29.105656 1181050 command_runner.go:130] > # stream_port = "0"
	I1024 20:02:29.105666 1181050 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1024 20:02:29.105674 1181050 command_runner.go:130] > # stream_enable_tls = false
	I1024 20:02:29.105681 1181050 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1024 20:02:29.105689 1181050 command_runner.go:130] > # stream_idle_timeout = ""
	I1024 20:02:29.105697 1181050 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1024 20:02:29.105707 1181050 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1024 20:02:29.105712 1181050 command_runner.go:130] > # minutes.
	I1024 20:02:29.105723 1181050 command_runner.go:130] > # stream_tls_cert = ""
	I1024 20:02:29.105744 1181050 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1024 20:02:29.105752 1181050 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1024 20:02:29.105759 1181050 command_runner.go:130] > # stream_tls_key = ""
	I1024 20:02:29.105769 1181050 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1024 20:02:29.105777 1181050 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1024 20:02:29.105787 1181050 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1024 20:02:29.105798 1181050 command_runner.go:130] > # stream_tls_ca = ""
	I1024 20:02:29.105810 1181050 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 20:02:29.105816 1181050 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1024 20:02:29.105828 1181050 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1024 20:02:29.105837 1181050 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1024 20:02:29.105887 1181050 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1024 20:02:29.105898 1181050 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1024 20:02:29.105903 1181050 command_runner.go:130] > [crio.runtime]
	I1024 20:02:29.105914 1181050 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1024 20:02:29.105925 1181050 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1024 20:02:29.105930 1181050 command_runner.go:130] > # "nofile=1024:2048"
	I1024 20:02:29.105948 1181050 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1024 20:02:29.105956 1181050 command_runner.go:130] > # default_ulimits = [
	I1024 20:02:29.105962 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.105972 1181050 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1024 20:02:29.105978 1181050 command_runner.go:130] > # no_pivot = false
	I1024 20:02:29.105985 1181050 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1024 20:02:29.105995 1181050 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1024 20:02:29.106003 1181050 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1024 20:02:29.106013 1181050 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1024 20:02:29.106025 1181050 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1024 20:02:29.106037 1181050 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 20:02:29.106044 1181050 command_runner.go:130] > # conmon = ""
	I1024 20:02:29.106053 1181050 command_runner.go:130] > # Cgroup setting for conmon
	I1024 20:02:29.106061 1181050 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1024 20:02:29.106069 1181050 command_runner.go:130] > conmon_cgroup = "pod"
	I1024 20:02:29.106077 1181050 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1024 20:02:29.106084 1181050 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1024 20:02:29.106101 1181050 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1024 20:02:29.106111 1181050 command_runner.go:130] > # conmon_env = [
	I1024 20:02:29.106115 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.106125 1181050 command_runner.go:130] > # Additional environment variables to set for all the
	I1024 20:02:29.106131 1181050 command_runner.go:130] > # containers. These are overridden if set in the
	I1024 20:02:29.106141 1181050 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1024 20:02:29.106146 1181050 command_runner.go:130] > # default_env = [
	I1024 20:02:29.106153 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.106160 1181050 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1024 20:02:29.106171 1181050 command_runner.go:130] > # selinux = false
	I1024 20:02:29.106182 1181050 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1024 20:02:29.106190 1181050 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1024 20:02:29.106201 1181050 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1024 20:02:29.106207 1181050 command_runner.go:130] > # seccomp_profile = ""
	I1024 20:02:29.106216 1181050 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1024 20:02:29.106224 1181050 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1024 20:02:29.106234 1181050 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1024 20:02:29.106246 1181050 command_runner.go:130] > # which might increase security.
	I1024 20:02:29.106256 1181050 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1024 20:02:29.106263 1181050 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1024 20:02:29.106273 1181050 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1024 20:02:29.106283 1181050 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1024 20:02:29.106291 1181050 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1024 20:02:29.106301 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:02:29.106306 1181050 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1024 20:02:29.106321 1181050 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1024 20:02:29.106331 1181050 command_runner.go:130] > # the cgroup blockio controller.
	I1024 20:02:29.106336 1181050 command_runner.go:130] > # blockio_config_file = ""
	I1024 20:02:29.106344 1181050 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1024 20:02:29.106352 1181050 command_runner.go:130] > # irqbalance daemon.
	I1024 20:02:29.106359 1181050 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1024 20:02:29.106370 1181050 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1024 20:02:29.106379 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:02:29.106385 1181050 command_runner.go:130] > # rdt_config_file = ""
	I1024 20:02:29.106399 1181050 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1024 20:02:29.106408 1181050 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1024 20:02:29.106415 1181050 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1024 20:02:29.106423 1181050 command_runner.go:130] > # separate_pull_cgroup = ""
	I1024 20:02:29.106431 1181050 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1024 20:02:29.106442 1181050 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1024 20:02:29.106446 1181050 command_runner.go:130] > # will be added.
	I1024 20:02:29.106452 1181050 command_runner.go:130] > # default_capabilities = [
	I1024 20:02:29.106459 1181050 command_runner.go:130] > # 	"CHOWN",
	I1024 20:02:29.106470 1181050 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1024 20:02:29.106478 1181050 command_runner.go:130] > # 	"FSETID",
	I1024 20:02:29.106483 1181050 command_runner.go:130] > # 	"FOWNER",
	I1024 20:02:29.106488 1181050 command_runner.go:130] > # 	"SETGID",
	I1024 20:02:29.106495 1181050 command_runner.go:130] > # 	"SETUID",
	I1024 20:02:29.106501 1181050 command_runner.go:130] > # 	"SETPCAP",
	I1024 20:02:29.106506 1181050 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1024 20:02:29.106511 1181050 command_runner.go:130] > # 	"KILL",
	I1024 20:02:29.106517 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.106527 1181050 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1024 20:02:29.106545 1181050 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1024 20:02:29.106558 1181050 command_runner.go:130] > # add_inheritable_capabilities = true
	I1024 20:02:29.106566 1181050 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1024 20:02:29.106573 1181050 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 20:02:29.106581 1181050 command_runner.go:130] > # default_sysctls = [
	I1024 20:02:29.106590 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.106596 1181050 command_runner.go:130] > # List of devices on the host that a
	I1024 20:02:29.106604 1181050 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1024 20:02:29.106620 1181050 command_runner.go:130] > # allowed_devices = [
	I1024 20:02:29.106625 1181050 command_runner.go:130] > # 	"/dev/fuse",
	I1024 20:02:29.106632 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.106638 1181050 command_runner.go:130] > # List of additional devices, specified as
	I1024 20:02:29.106677 1181050 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1024 20:02:29.106700 1181050 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1024 20:02:29.106710 1181050 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1024 20:02:29.106718 1181050 command_runner.go:130] > # additional_devices = [
	I1024 20:02:29.106722 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.106729 1181050 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1024 20:02:29.106737 1181050 command_runner.go:130] > # cdi_spec_dirs = [
	I1024 20:02:29.106741 1181050 command_runner.go:130] > # 	"/etc/cdi",
	I1024 20:02:29.106747 1181050 command_runner.go:130] > # 	"/var/run/cdi",
	I1024 20:02:29.106753 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.106766 1181050 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1024 20:02:29.106778 1181050 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1024 20:02:29.106783 1181050 command_runner.go:130] > # Defaults to false.
	I1024 20:02:29.106789 1181050 command_runner.go:130] > # device_ownership_from_security_context = false
	I1024 20:02:29.106799 1181050 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1024 20:02:29.106807 1181050 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1024 20:02:29.106815 1181050 command_runner.go:130] > # hooks_dir = [
	I1024 20:02:29.106821 1181050 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1024 20:02:29.106827 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.106841 1181050 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1024 20:02:29.106853 1181050 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1024 20:02:29.106860 1181050 command_runner.go:130] > # its default mounts from the following two files:
	I1024 20:02:29.106867 1181050 command_runner.go:130] > #
	I1024 20:02:29.106874 1181050 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1024 20:02:29.106882 1181050 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1024 20:02:29.106891 1181050 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1024 20:02:29.106896 1181050 command_runner.go:130] > #
	I1024 20:02:29.106909 1181050 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1024 20:02:29.106926 1181050 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1024 20:02:29.106941 1181050 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1024 20:02:29.106951 1181050 command_runner.go:130] > #      only add mounts it finds in this file.
	I1024 20:02:29.106955 1181050 command_runner.go:130] > #
	I1024 20:02:29.106960 1181050 command_runner.go:130] > # default_mounts_file = ""
	I1024 20:02:29.106969 1181050 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1024 20:02:29.106979 1181050 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1024 20:02:29.106994 1181050 command_runner.go:130] > # pids_limit = 0
	I1024 20:02:29.107007 1181050 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1024 20:02:29.107015 1181050 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1024 20:02:29.107026 1181050 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1024 20:02:29.107036 1181050 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1024 20:02:29.107112 1181050 command_runner.go:130] > # log_size_max = -1
	I1024 20:02:29.107130 1181050 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1024 20:02:29.107136 1181050 command_runner.go:130] > # log_to_journald = false
	I1024 20:02:29.107144 1181050 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1024 20:02:29.107155 1181050 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1024 20:02:29.107164 1181050 command_runner.go:130] > # Path to directory for container attach sockets.
	I1024 20:02:29.107171 1181050 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1024 20:02:29.107190 1181050 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1024 20:02:29.107199 1181050 command_runner.go:130] > # bind_mount_prefix = ""
	I1024 20:02:29.107206 1181050 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1024 20:02:29.107213 1181050 command_runner.go:130] > # read_only = false
	I1024 20:02:29.107221 1181050 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1024 20:02:29.107231 1181050 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1024 20:02:29.107241 1181050 command_runner.go:130] > # live configuration reload.
	I1024 20:02:29.107246 1181050 command_runner.go:130] > # log_level = "info"
	I1024 20:02:29.107261 1181050 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1024 20:02:29.107273 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:02:29.107278 1181050 command_runner.go:130] > # log_filter = ""
	I1024 20:02:29.107285 1181050 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1024 20:02:29.107296 1181050 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1024 20:02:29.107301 1181050 command_runner.go:130] > # separated by comma.
	I1024 20:02:29.107306 1181050 command_runner.go:130] > # uid_mappings = ""
	I1024 20:02:29.107317 1181050 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1024 20:02:29.107333 1181050 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1024 20:02:29.107342 1181050 command_runner.go:130] > # separated by comma.
	I1024 20:02:29.107347 1181050 command_runner.go:130] > # gid_mappings = ""
	I1024 20:02:29.107358 1181050 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1024 20:02:29.107367 1181050 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 20:02:29.107378 1181050 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 20:02:29.107384 1181050 command_runner.go:130] > # minimum_mappable_uid = -1
	I1024 20:02:29.107391 1181050 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1024 20:02:29.107401 1181050 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1024 20:02:29.107416 1181050 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1024 20:02:29.107425 1181050 command_runner.go:130] > # minimum_mappable_gid = -1
	I1024 20:02:29.107433 1181050 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1024 20:02:29.107443 1181050 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1024 20:02:29.107451 1181050 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1024 20:02:29.107459 1181050 command_runner.go:130] > # ctr_stop_timeout = 30
	I1024 20:02:29.107467 1181050 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1024 20:02:29.107537 1181050 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1024 20:02:29.107562 1181050 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1024 20:02:29.107571 1181050 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1024 20:02:29.107579 1181050 command_runner.go:130] > # drop_infra_ctr = true
	I1024 20:02:29.107587 1181050 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1024 20:02:29.107597 1181050 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1024 20:02:29.107606 1181050 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1024 20:02:29.107611 1181050 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1024 20:02:29.107620 1181050 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1024 20:02:29.107635 1181050 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1024 20:02:29.107641 1181050 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1024 20:02:29.107652 1181050 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1024 20:02:29.107659 1181050 command_runner.go:130] > # pinns_path = ""
	I1024 20:02:29.107667 1181050 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1024 20:02:29.107679 1181050 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1024 20:02:29.107687 1181050 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1024 20:02:29.107692 1181050 command_runner.go:130] > # default_runtime = "runc"
	I1024 20:02:29.107701 1181050 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1024 20:02:29.107721 1181050 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1024 20:02:29.107738 1181050 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1024 20:02:29.107745 1181050 command_runner.go:130] > # creation as a file is not desired either.
	I1024 20:02:29.107755 1181050 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1024 20:02:29.107765 1181050 command_runner.go:130] > # the hostname is being managed dynamically.
	I1024 20:02:29.107771 1181050 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1024 20:02:29.107775 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.107789 1181050 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1024 20:02:29.107800 1181050 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1024 20:02:29.107808 1181050 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1024 20:02:29.107819 1181050 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1024 20:02:29.107823 1181050 command_runner.go:130] > #
	I1024 20:02:29.107836 1181050 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1024 20:02:29.107846 1181050 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1024 20:02:29.107851 1181050 command_runner.go:130] > #  runtime_type = "oci"
	I1024 20:02:29.107865 1181050 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1024 20:02:29.107873 1181050 command_runner.go:130] > #  privileged_without_host_devices = false
	I1024 20:02:29.107881 1181050 command_runner.go:130] > #  allowed_annotations = []
	I1024 20:02:29.107885 1181050 command_runner.go:130] > # Where:
	I1024 20:02:29.107892 1181050 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1024 20:02:29.107902 1181050 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1024 20:02:29.107910 1181050 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1024 20:02:29.107921 1181050 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1024 20:02:29.107926 1181050 command_runner.go:130] > #   in $PATH.
	I1024 20:02:29.107941 1181050 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1024 20:02:29.107950 1181050 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1024 20:02:29.107958 1181050 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1024 20:02:29.107966 1181050 command_runner.go:130] > #   state.
	I1024 20:02:29.107974 1181050 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1024 20:02:29.107983 1181050 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1024 20:02:29.107994 1181050 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1024 20:02:29.108010 1181050 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1024 20:02:29.108021 1181050 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1024 20:02:29.108030 1181050 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1024 20:02:29.108037 1181050 command_runner.go:130] > #   The currently recognized values are:
	I1024 20:02:29.108045 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1024 20:02:29.108057 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1024 20:02:29.108064 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1024 20:02:29.108075 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1024 20:02:29.108091 1181050 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1024 20:02:29.108103 1181050 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1024 20:02:29.108110 1181050 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1024 20:02:29.108122 1181050 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1024 20:02:29.108128 1181050 command_runner.go:130] > #   should be moved to the container's cgroup
	I1024 20:02:29.108133 1181050 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1024 20:02:29.108145 1181050 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1024 20:02:29.108150 1181050 command_runner.go:130] > runtime_type = "oci"
	I1024 20:02:29.108161 1181050 command_runner.go:130] > runtime_root = "/run/runc"
	I1024 20:02:29.108173 1181050 command_runner.go:130] > runtime_config_path = ""
	I1024 20:02:29.108178 1181050 command_runner.go:130] > monitor_path = ""
	I1024 20:02:29.108188 1181050 command_runner.go:130] > monitor_cgroup = ""
	I1024 20:02:29.108193 1181050 command_runner.go:130] > monitor_exec_cgroup = ""
	I1024 20:02:29.108292 1181050 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1024 20:02:29.108309 1181050 command_runner.go:130] > # running containers
	I1024 20:02:29.108316 1181050 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1024 20:02:29.108323 1181050 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1024 20:02:29.108341 1181050 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1024 20:02:29.108352 1181050 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1024 20:02:29.108360 1181050 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1024 20:02:29.108368 1181050 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1024 20:02:29.108374 1181050 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1024 20:02:29.108379 1181050 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1024 20:02:29.108388 1181050 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1024 20:02:29.108394 1181050 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1024 20:02:29.108414 1181050 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1024 20:02:29.108424 1181050 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1024 20:02:29.108445 1181050 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1024 20:02:29.108458 1181050 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1024 20:02:29.108468 1181050 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1024 20:02:29.108478 1181050 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1024 20:02:29.108495 1181050 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1024 20:02:29.108508 1181050 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1024 20:02:29.108520 1181050 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1024 20:02:29.108529 1181050 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1024 20:02:29.108537 1181050 command_runner.go:130] > # Example:
	I1024 20:02:29.108542 1181050 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1024 20:02:29.108551 1181050 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1024 20:02:29.108564 1181050 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1024 20:02:29.108576 1181050 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1024 20:02:29.108581 1181050 command_runner.go:130] > # cpuset = 0
	I1024 20:02:29.108586 1181050 command_runner.go:130] > # cpushares = "0-1"
	I1024 20:02:29.108593 1181050 command_runner.go:130] > # Where:
	I1024 20:02:29.108599 1181050 command_runner.go:130] > # The workload name is workload-type.
	I1024 20:02:29.108610 1181050 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1024 20:02:29.108618 1181050 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1024 20:02:29.108629 1181050 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1024 20:02:29.108645 1181050 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1024 20:02:29.108656 1181050 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1024 20:02:29.108660 1181050 command_runner.go:130] > # 
	I1024 20:02:29.108671 1181050 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1024 20:02:29.108675 1181050 command_runner.go:130] > #
	I1024 20:02:29.108682 1181050 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1024 20:02:29.108692 1181050 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1024 20:02:29.108700 1181050 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1024 20:02:29.108717 1181050 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1024 20:02:29.108727 1181050 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1024 20:02:29.108733 1181050 command_runner.go:130] > [crio.image]
	I1024 20:02:29.108742 1181050 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1024 20:02:29.108748 1181050 command_runner.go:130] > # default_transport = "docker://"
	I1024 20:02:29.108759 1181050 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1024 20:02:29.108767 1181050 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1024 20:02:29.108773 1181050 command_runner.go:130] > # global_auth_file = ""
	I1024 20:02:29.108790 1181050 command_runner.go:130] > # The image used to instantiate infra containers.
	I1024 20:02:29.108802 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:02:29.108809 1181050 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1024 20:02:29.108817 1181050 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1024 20:02:29.108829 1181050 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1024 20:02:29.108836 1181050 command_runner.go:130] > # This option supports live configuration reload.
	I1024 20:02:29.108844 1181050 command_runner.go:130] > # pause_image_auth_file = ""
	I1024 20:02:29.108851 1181050 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1024 20:02:29.108864 1181050 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1024 20:02:29.108875 1181050 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1024 20:02:29.108882 1181050 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1024 20:02:29.108890 1181050 command_runner.go:130] > # pause_command = "/pause"
	I1024 20:02:29.108898 1181050 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1024 20:02:29.108908 1181050 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1024 20:02:29.108916 1181050 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1024 20:02:29.108927 1181050 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1024 20:02:29.108933 1181050 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1024 20:02:29.108941 1181050 command_runner.go:130] > # signature_policy = ""
	I1024 20:02:29.109013 1181050 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1024 20:02:29.109030 1181050 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1024 20:02:29.109037 1181050 command_runner.go:130] > # changing them here.
	I1024 20:02:29.109042 1181050 command_runner.go:130] > # insecure_registries = [
	I1024 20:02:29.109047 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.109054 1181050 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1024 20:02:29.109063 1181050 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1024 20:02:29.109069 1181050 command_runner.go:130] > # image_volumes = "mkdir"
	I1024 20:02:29.109075 1181050 command_runner.go:130] > # Temporary directory to use for storing big files
	I1024 20:02:29.109093 1181050 command_runner.go:130] > # big_files_temporary_dir = ""
	I1024 20:02:29.109105 1181050 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1024 20:02:29.109110 1181050 command_runner.go:130] > # CNI plugins.
	I1024 20:02:29.109117 1181050 command_runner.go:130] > [crio.network]
	I1024 20:02:29.109124 1181050 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1024 20:02:29.109131 1181050 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1024 20:02:29.109136 1181050 command_runner.go:130] > # cni_default_network = ""
	I1024 20:02:29.109146 1181050 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1024 20:02:29.109152 1181050 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1024 20:02:29.109163 1181050 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1024 20:02:29.109168 1181050 command_runner.go:130] > # plugin_dirs = [
	I1024 20:02:29.109175 1181050 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1024 20:02:29.109179 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.109187 1181050 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1024 20:02:29.109194 1181050 command_runner.go:130] > [crio.metrics]
	I1024 20:02:29.109200 1181050 command_runner.go:130] > # Globally enable or disable metrics support.
	I1024 20:02:29.109208 1181050 command_runner.go:130] > # enable_metrics = false
	I1024 20:02:29.109214 1181050 command_runner.go:130] > # Specify enabled metrics collectors.
	I1024 20:02:29.109220 1181050 command_runner.go:130] > # Per default all metrics are enabled.
	I1024 20:02:29.109230 1181050 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1024 20:02:29.109241 1181050 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1024 20:02:29.109248 1181050 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1024 20:02:29.109256 1181050 command_runner.go:130] > # metrics_collectors = [
	I1024 20:02:29.109261 1181050 command_runner.go:130] > # 	"operations",
	I1024 20:02:29.109267 1181050 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1024 20:02:29.109276 1181050 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1024 20:02:29.109281 1181050 command_runner.go:130] > # 	"operations_errors",
	I1024 20:02:29.109289 1181050 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1024 20:02:29.109294 1181050 command_runner.go:130] > # 	"image_pulls_by_name",
	I1024 20:02:29.109300 1181050 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1024 20:02:29.109312 1181050 command_runner.go:130] > # 	"image_pulls_failures",
	I1024 20:02:29.109321 1181050 command_runner.go:130] > # 	"image_pulls_successes",
	I1024 20:02:29.109327 1181050 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1024 20:02:29.109334 1181050 command_runner.go:130] > # 	"image_layer_reuse",
	I1024 20:02:29.109340 1181050 command_runner.go:130] > # 	"containers_oom_total",
	I1024 20:02:29.109345 1181050 command_runner.go:130] > # 	"containers_oom",
	I1024 20:02:29.109353 1181050 command_runner.go:130] > # 	"processes_defunct",
	I1024 20:02:29.109358 1181050 command_runner.go:130] > # 	"operations_total",
	I1024 20:02:29.109364 1181050 command_runner.go:130] > # 	"operations_latency_seconds",
	I1024 20:02:29.109373 1181050 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1024 20:02:29.109379 1181050 command_runner.go:130] > # 	"operations_errors_total",
	I1024 20:02:29.109385 1181050 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1024 20:02:29.109391 1181050 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1024 20:02:29.109399 1181050 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1024 20:02:29.109404 1181050 command_runner.go:130] > # 	"image_pulls_success_total",
	I1024 20:02:29.109412 1181050 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1024 20:02:29.109422 1181050 command_runner.go:130] > # 	"containers_oom_count_total",
	I1024 20:02:29.109429 1181050 command_runner.go:130] > # ]
	I1024 20:02:29.109436 1181050 command_runner.go:130] > # The port on which the metrics server will listen.
	I1024 20:02:29.109443 1181050 command_runner.go:130] > # metrics_port = 9090
	I1024 20:02:29.109450 1181050 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1024 20:02:29.109457 1181050 command_runner.go:130] > # metrics_socket = ""
	I1024 20:02:29.109463 1181050 command_runner.go:130] > # The certificate for the secure metrics server.
	I1024 20:02:29.109471 1181050 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1024 20:02:29.109482 1181050 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1024 20:02:29.109489 1181050 command_runner.go:130] > # certificate on any modification event.
	I1024 20:02:29.109497 1181050 command_runner.go:130] > # metrics_cert = ""
	I1024 20:02:29.109504 1181050 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1024 20:02:29.109513 1181050 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1024 20:02:29.109517 1181050 command_runner.go:130] > # metrics_key = ""
	I1024 20:02:29.109527 1181050 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1024 20:02:29.109534 1181050 command_runner.go:130] > [crio.tracing]
	I1024 20:02:29.109541 1181050 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1024 20:02:29.109546 1181050 command_runner.go:130] > # enable_tracing = false
	I1024 20:02:29.109553 1181050 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1024 20:02:29.109561 1181050 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1024 20:02:29.109568 1181050 command_runner.go:130] > # Number of samples to collect per million spans.
	I1024 20:02:29.109576 1181050 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1024 20:02:29.109583 1181050 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1024 20:02:29.109591 1181050 command_runner.go:130] > [crio.stats]
	I1024 20:02:29.109598 1181050 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1024 20:02:29.109607 1181050 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1024 20:02:29.109613 1181050 command_runner.go:130] > # stats_collection_period = 0
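The [crio.metrics], [crio.tracing] and [crio.stats] stanzas above are all commented out, so CRI-O keeps its defaults: metrics and tracing disabled, stats collected on demand. If enable_metrics were set to true, CRI-O would serve Prometheus text on metrics_port (9090 unless overridden). A minimal Go sketch of scraping that endpoint, assuming metrics have been enabled and the default port is in use:

// scrape_crio_metrics.go - illustrative only; assumes enable_metrics = true
// in /etc/crio/crio.conf and the default metrics_port of 9090.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// CRI-O serves Prometheus text format on this endpoint when enabled.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint unreachable:", err)
		return
	}
	defer resp.Body.Close()

	// Per the config comments above, collectors may be exposed under either
	// the crio_ or the container_runtime_ prefix, so filter on both.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "crio_") || strings.HasPrefix(line, "container_runtime_") {
			fmt.Println(line)
		}
	}
}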
	I1024 20:02:29.110205 1181050 cni.go:84] Creating CNI manager for ""
	I1024 20:02:29.110223 1181050 cni.go:136] 2 nodes found, recommending kindnet
	I1024 20:02:29.110240 1181050 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1024 20:02:29.110260 1181050 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-773966 NodeName:multinode-773966-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 20:02:29.110389 1181050 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-773966-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
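kubeadm.go renders the configuration above as a single four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to confirm such a stream parses and to list what it contains, sketched in Go; the file name kubeadm.yaml and the gopkg.in/yaml.v3 decoder are illustrative assumptions, not what minikube itself uses:

// list_kubeadm_docs.go - sketch: decode the multi-document kubeadm config
// stream and print each document's apiVersion and kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed path to the stream above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		// Decode returns io.EOF once the last document has been read.
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}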
	
	I1024 20:02:29.110481 1181050 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-773966-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-773966 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1024 20:02:29.110552 1181050 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 20:02:29.121121 1181050 command_runner.go:130] > kubeadm
	I1024 20:02:29.121140 1181050 command_runner.go:130] > kubectl
	I1024 20:02:29.121146 1181050 command_runner.go:130] > kubelet
	I1024 20:02:29.121181 1181050 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 20:02:29.121242 1181050 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1024 20:02:29.132222 1181050 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1024 20:02:29.153027 1181050 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 20:02:29.174703 1181050 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1024 20:02:29.178981 1181050 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
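The one-liner above updates /etc/hosts idempotently: grep -v strips any stale control-plane.minikube.internal entry, echo appends the current mapping, and the result is staged under /tmp before being copied over the live file. A hedged Go equivalent of the same rewrite; it writes a hosts.new file instead of /etc/hosts, since replacing the real file needs root:

// pin_hosts_entry.go - sketch of the idempotent hosts update run above.
// The IP and hostname mirror the log; output path is an assumption.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.58.2\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any existing line for the control-plane alias, keep the rest.
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// Append the current mapping and stage the result.
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
}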
	I1024 20:02:29.192451 1181050 host.go:66] Checking if "multinode-773966" exists ...
	I1024 20:02:29.192703 1181050 start.go:304] JoinCluster: &{Name:multinode-773966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-773966 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 20:02:29.192790 1181050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1024 20:02:29.192839 1181050 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:02:29.193668 1181050 config.go:182] Loaded profile config "multinode-773966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:02:29.212969 1181050 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:02:29.393493 1181050 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token jmah3e.b4ucbguq4d0rg5bi --discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 
	I1024 20:02:29.393558 1181050 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 20:02:29.393591 1181050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jmah3e.b4ucbguq4d0rg5bi --discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-773966-m02"
	I1024 20:02:29.436250 1181050 command_runner.go:130] > [preflight] Running pre-flight checks
	I1024 20:02:29.476442 1181050 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1024 20:02:29.476466 1181050 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1048-aws
	I1024 20:02:29.476473 1181050 command_runner.go:130] > OS: Linux
	I1024 20:02:29.476479 1181050 command_runner.go:130] > CGROUPS_CPU: enabled
	I1024 20:02:29.476486 1181050 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1024 20:02:29.476492 1181050 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1024 20:02:29.476499 1181050 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1024 20:02:29.476505 1181050 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1024 20:02:29.476520 1181050 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1024 20:02:29.476536 1181050 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1024 20:02:29.476542 1181050 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1024 20:02:29.476552 1181050 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1024 20:02:29.588932 1181050 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1024 20:02:29.588971 1181050 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1024 20:02:29.620540 1181050 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 20:02:29.620953 1181050 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 20:02:29.621247 1181050 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1024 20:02:29.730150 1181050 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1024 20:02:32.746853 1181050 command_runner.go:130] > This node has joined the cluster:
	I1024 20:02:32.746877 1181050 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1024 20:02:32.746885 1181050 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1024 20:02:32.746893 1181050 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1024 20:02:32.750218 1181050 command_runner.go:130] ! W1024 20:02:29.435624    1025 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1024 20:02:32.750250 1181050 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1048-aws\n", err: exit status 1
	I1024 20:02:32.750264 1181050 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 20:02:32.750277 1181050 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token jmah3e.b4ucbguq4d0rg5bi --discovery-token-ca-cert-hash sha256:fc1ef00058459f2fefb2f373ccf3e7b4cb2e0359fefe1d46b340ed2a4318b3f5 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-773966-m02": (3.356668795s)
	I1024 20:02:32.750292 1181050 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1024 20:02:32.969328 1181050 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1024 20:02:32.969359 1181050 start.go:306] JoinCluster complete in 3.776655197s
	I1024 20:02:32.969371 1181050 cni.go:84] Creating CNI manager for ""
	I1024 20:02:32.969376 1181050 cni.go:136] 2 nodes found, recommending kindnet
	I1024 20:02:32.969430 1181050 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1024 20:02:32.974509 1181050 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1024 20:02:32.974531 1181050 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1024 20:02:32.974538 1181050 command_runner.go:130] > Device: 3ah/58d	Inode: 1573330     Links: 1
	I1024 20:02:32.974546 1181050 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1024 20:02:32.974553 1181050 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1024 20:02:32.974575 1181050 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1024 20:02:32.974581 1181050 command_runner.go:130] > Change: 2023-10-24 19:24:00.149156281 +0000
	I1024 20:02:32.974587 1181050 command_runner.go:130] >  Birth: 2023-10-24 19:24:00.101156460 +0000
	I1024 20:02:32.974628 1181050 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1024 20:02:32.974635 1181050 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1024 20:02:33.003069 1181050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1024 20:02:33.322889 1181050 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1024 20:02:33.327819 1181050 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1024 20:02:33.336573 1181050 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1024 20:02:33.352154 1181050 command_runner.go:130] > daemonset.apps/kindnet configured
	I1024 20:02:33.364224 1181050 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:02:33.364574 1181050 kapi.go:59] client config for multinode-773966: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 20:02:33.364995 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1024 20:02:33.365030 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:33.365053 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:33.365088 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:33.368051 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:33.368111 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:33.368131 1181050 round_trippers.go:580]     Audit-Id: cfdcf2f8-7dc9-49c6-b951-61712b3eeb2d
	I1024 20:02:33.368150 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:33.368182 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:33.368206 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:33.368225 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:33.368242 1181050 round_trippers.go:580]     Content-Length: 291
	I1024 20:02:33.368278 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:33 GMT
	I1024 20:02:33.368480 1181050 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"880e54a6-86b9-4b4b-bfb6-0a1742a3b535","resourceVersion":"411","creationTimestamp":"2023-10-24T20:01:29Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1024 20:02:33.368628 1181050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-773966" context rescaled to 1 replicas
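The rescale above goes through the deployment's scale subresource (the GET .../deployments/coredns/scale seen in the round-tripper trace) rather than patching the deployment object itself. The equivalent calls with client-go, as a sketch; the kubeconfig path is a placeholder:

// rescale_coredns.go - sketch of the scale-subresource call logged above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// GET the Scale object, as in the log above.
	scale, err := cs.AppsV1().Deployments("kube-system").
		GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Only write back if the replica count actually needs to change.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").
			UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns replicas:", scale.Spec.Replicas)
}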
	I1024 20:02:33.368687 1181050 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1024 20:02:33.373643 1181050 out.go:177] * Verifying Kubernetes components...
	I1024 20:02:33.377010 1181050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:02:33.396375 1181050 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:02:33.396639 1181050 kapi.go:59] client config for multinode-773966: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.crt", KeyFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/multinode-773966/client.key", CAFile:"/home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c9c60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1024 20:02:33.396902 1181050 node_ready.go:35] waiting up to 6m0s for node "multinode-773966-m02" to be "Ready" ...
	I1024 20:02:33.396972 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:33.396983 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:33.396993 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:33.397006 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:33.399525 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:33.399543 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:33.399551 1181050 round_trippers.go:580]     Audit-Id: 042f9de8-9846-4f4e-b377-799bff2eb04e
	I1024 20:02:33.399558 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:33.399564 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:33.399570 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:33.399578 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:33.399595 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:33 GMT
	I1024 20:02:33.400050 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"449","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1024 20:02:33.400456 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:33.400472 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:33.400481 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:33.400487 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:33.402941 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:33.402964 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:33.402972 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:33 GMT
	I1024 20:02:33.402978 1181050 round_trippers.go:580]     Audit-Id: 60e53004-6fd0-4cd6-8f8d-74cf18365a83
	I1024 20:02:33.402984 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:33.402991 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:33.402997 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:33.403009 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:33.403456 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"449","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1024 20:02:33.904525 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:33.904546 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:33.904556 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:33.904563 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:33.907099 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:33.907156 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:33.907178 1181050 round_trippers.go:580]     Audit-Id: d7fd8b65-4d66-451b-80f3-bead127b8f9b
	I1024 20:02:33.907196 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:33.907230 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:33.907252 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:33.907270 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:33.907289 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:33 GMT
	I1024 20:02:33.907455 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"449","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1024 20:02:34.404091 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:34.404113 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:34.404124 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:34.404133 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:34.406632 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:34.406663 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:34.406671 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:34 GMT
	I1024 20:02:34.406678 1181050 round_trippers.go:580]     Audit-Id: c43d40f9-6d18-45eb-ab91-0c2379c2c2f9
	I1024 20:02:34.406684 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:34.406690 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:34.406696 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:34.406703 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:34.406814 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"449","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1024 20:02:34.904909 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:34.904932 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:34.904942 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:34.904953 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:34.907476 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:34.907532 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:34.907571 1181050 round_trippers.go:580]     Audit-Id: c881be7c-785c-4629-ae36-dd4e95c4c642
	I1024 20:02:34.907598 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:34.907614 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:34.907621 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:34.907627 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:34.907649 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:34 GMT
	I1024 20:02:34.907816 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"449","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1024 20:02:35.404938 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:35.404961 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:35.404971 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:35.404978 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:35.407649 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:35.407723 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:35.407745 1181050 round_trippers.go:580]     Audit-Id: 71e47315-8405-4d69-b775-0acb0720f5bd
	I1024 20:02:35.407765 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:35.407777 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:35.407802 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:35.407809 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:35.407820 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:35 GMT
	I1024 20:02:35.407936 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"449","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1024 20:02:35.408329 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
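node_ready.go is doing the waiting here: it re-fetches the node object roughly every 500ms and reports "Ready":"False" until the kubelet posts a NodeReady=True condition, which in turn requires the kindnet CNI pod to come up on the new node. The same check sketched with client-go; the node name and 6m0s timeout mirror the log, and the kubeconfig path is a placeholder:

// wait_node_ready.go - sketch of the readiness poll performed above:
// fetch the node and test its NodeReady condition until it turns True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"multinode-773966-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls at ~500ms
	}
	fmt.Println("timed out waiting for node to become Ready")
}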
	I1024 20:02:35.904054 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:35.904078 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:35.904088 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:35.904095 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:35.906749 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:35.906787 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:35.906795 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:35.906801 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:35.906808 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:35.906826 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:35 GMT
	I1024 20:02:35.906841 1181050 round_trippers.go:580]     Audit-Id: b796dc8e-2792-4f54-98fa-55f0cf023136
	I1024 20:02:35.906860 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:35.907037 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"449","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1024 20:02:36.404021 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:36.404047 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:36.404058 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:36.404065 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:36.406575 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:36.406637 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:36.406659 1181050 round_trippers.go:580]     Audit-Id: 314767c1-d474-4249-b1ed-5470cd8caa7e
	I1024 20:02:36.406678 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:36.406708 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:36.406716 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:36.406722 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:36.406730 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:36 GMT
	I1024 20:02:36.406840 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:36.904215 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:36.904240 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:36.904250 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:36.904257 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:36.907852 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:02:36.907879 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:36.907888 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:36 GMT
	I1024 20:02:36.907894 1181050 round_trippers.go:580]     Audit-Id: 7803c94e-dc1f-4ffc-bbae-3ccd0f7ff281
	I1024 20:02:36.907905 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:36.907912 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:36.907920 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:36.907927 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:36.908146 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:37.404676 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:37.404699 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:37.404710 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:37.404719 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:37.407257 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:37.407284 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:37.407293 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:37 GMT
	I1024 20:02:37.407301 1181050 round_trippers.go:580]     Audit-Id: d99a67ec-f46f-4b94-bfeb-79f83e3aec5b
	I1024 20:02:37.407307 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:37.407314 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:37.407324 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:37.407331 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:37.407447 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:37.904563 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:37.904586 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:37.904596 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:37.904603 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:37.907123 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:37.907144 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:37.907153 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:37.907161 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:37 GMT
	I1024 20:02:37.907167 1181050 round_trippers.go:580]     Audit-Id: bb0c27de-9ae9-4057-8ffc-a231a527a06d
	I1024 20:02:37.907174 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:37.907180 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:37.907186 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:37.907537 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:37.907892 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:38.404128 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:38.404150 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:38.404162 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:38.404170 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:38.406875 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:38.406900 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:38.406909 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:38.406917 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:38 GMT
	I1024 20:02:38.406923 1181050 round_trippers.go:580]     Audit-Id: 7ecf087f-e18a-4f65-adf7-8ef42c249b94
	I1024 20:02:38.406929 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:38.406935 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:38.406941 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:38.407344 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:38.904364 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:38.904384 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:38.904398 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:38.904410 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:38.906815 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:38.906841 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:38.906852 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:38.906863 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:38.906870 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:38.906877 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:38.906887 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:38 GMT
	I1024 20:02:38.906903 1181050 round_trippers.go:580]     Audit-Id: 715c4104-1208-4dd4-a259-cbbc817c52bc
	I1024 20:02:38.907047 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:39.404024 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:39.404049 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:39.404061 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:39.404069 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:39.407351 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:02:39.407369 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:39.407378 1181050 round_trippers.go:580]     Audit-Id: c5ea8765-d10f-42dc-a6ab-86954f32122e
	I1024 20:02:39.407384 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:39.407390 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:39.407396 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:39.407402 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:39.407409 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:39 GMT
	I1024 20:02:39.407520 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:39.904299 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:39.904324 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:39.904338 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:39.904345 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:39.906838 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:39.906863 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:39.906871 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:39.906882 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:39.906890 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:39 GMT
	I1024 20:02:39.906896 1181050 round_trippers.go:580]     Audit-Id: 4ae3b96c-2a84-4fce-a653-e86f8017ab47
	I1024 20:02:39.906902 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:39.906908 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:39.907089 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:40.404019 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:40.404041 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:40.404053 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:40.404060 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:40.406829 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:40.406849 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:40.406858 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:40.406864 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:40.406871 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:40.406877 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:40.406884 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:40 GMT
	I1024 20:02:40.406891 1181050 round_trippers.go:580]     Audit-Id: 960c0f81-c0c1-41b1-a903-fa7c48b5489d
	I1024 20:02:40.407901 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:40.408278 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:40.904559 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:40.904579 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:40.904589 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:40.904596 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:40.907391 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:40.907410 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:40.907418 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:40.907426 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:40.907433 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:40 GMT
	I1024 20:02:40.907439 1181050 round_trippers.go:580]     Audit-Id: 00e0d8e6-cd07-4907-bb98-a5151fd5afaf
	I1024 20:02:40.907445 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:40.907451 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:40.907604 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:41.404531 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:41.404555 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:41.404565 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:41.404572 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:41.407047 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:41.407074 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:41.407083 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:41.407089 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:41.407096 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:41.407102 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:41 GMT
	I1024 20:02:41.407110 1181050 round_trippers.go:580]     Audit-Id: 84801d3b-dbfe-4df9-a3fe-6a5f105b3b77
	I1024 20:02:41.407119 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:41.407231 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:41.904646 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:41.904669 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:41.904678 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:41.904685 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:41.907061 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:41.907080 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:41.907088 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:41.907095 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:41.907101 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:41.907107 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:41 GMT
	I1024 20:02:41.907113 1181050 round_trippers.go:580]     Audit-Id: 0d39bfc0-adaf-4d9e-be9a-a795534fd943
	I1024 20:02:41.907119 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:41.907240 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:42.404241 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:42.404275 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:42.404286 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:42.404294 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:42.406874 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:42.406900 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:42.406910 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:42.406916 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:42.406925 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:42 GMT
	I1024 20:02:42.406933 1181050 round_trippers.go:580]     Audit-Id: ea2f9958-9efd-4632-97bc-3882963af1f3
	I1024 20:02:42.406939 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:42.406950 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:42.407114 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"465","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1024 20:02:42.904188 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:42.904221 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:42.904231 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:42.904238 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:42.906694 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:42.906719 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:42.906728 1181050 round_trippers.go:580]     Audit-Id: 53e282aa-4c95-44eb-bc87-c331764b0c31
	I1024 20:02:42.906734 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:42.906741 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:42.906747 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:42.906755 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:42.906762 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:42 GMT
	I1024 20:02:42.906883 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:42.907259 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:43.403950 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:43.403969 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:43.403979 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:43.403986 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:43.406346 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:43.406366 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:43.406375 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:43.406381 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:43.406397 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:43.406407 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:43.406413 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:43 GMT
	I1024 20:02:43.406419 1181050 round_trippers.go:580]     Audit-Id: e0a30b03-e919-448a-b9a8-16054e86e84a
	I1024 20:02:43.406666 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:43.904808 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:43.904833 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:43.904843 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:43.904850 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:43.907262 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:43.907283 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:43.907292 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:43 GMT
	I1024 20:02:43.907299 1181050 round_trippers.go:580]     Audit-Id: 3ec2435e-b6b0-431d-b890-e6b4ef4354f5
	I1024 20:02:43.907305 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:43.907312 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:43.907320 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:43.907328 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:43.907558 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:44.404396 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:44.404421 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:44.404432 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:44.404439 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:44.406913 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:44.406934 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:44.406942 1181050 round_trippers.go:580]     Audit-Id: a388e525-3309-4f58-844f-2eb410f4284e
	I1024 20:02:44.406949 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:44.406956 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:44.406962 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:44.406974 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:44.406981 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:44 GMT
	I1024 20:02:44.407169 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:44.904047 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:44.904069 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:44.904079 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:44.904087 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:44.907381 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:02:44.907406 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:44.907415 1181050 round_trippers.go:580]     Audit-Id: 7c63ec12-d766-469a-b2c8-c5b301419e6c
	I1024 20:02:44.907422 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:44.907429 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:44.907435 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:44.907442 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:44.907449 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:44 GMT
	I1024 20:02:44.907678 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:44.908048 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:45.404822 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:45.404846 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:45.404856 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:45.404863 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:45.407382 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:45.407402 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:45.407410 1181050 round_trippers.go:580]     Audit-Id: 54a128d7-aa3c-44d5-980d-b0cfbe85135b
	I1024 20:02:45.407417 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:45.407427 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:45.407433 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:45.407440 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:45.407446 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:45 GMT
	I1024 20:02:45.407613 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:45.904775 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:45.904800 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:45.904811 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:45.904818 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:45.907375 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:45.907393 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:45.907402 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:45.907408 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:45.907414 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:45.907421 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:45 GMT
	I1024 20:02:45.907427 1181050 round_trippers.go:580]     Audit-Id: 2214392c-5410-433c-b642-0492c1f94144
	I1024 20:02:45.907433 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:45.907544 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:46.404514 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:46.404539 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:46.404549 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:46.404557 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:46.407067 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:46.407093 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:46.407102 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:46 GMT
	I1024 20:02:46.407109 1181050 round_trippers.go:580]     Audit-Id: f605503c-e370-425a-ac5a-2d11568d2214
	I1024 20:02:46.407115 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:46.407120 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:46.407127 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:46.407133 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:46.407402 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:46.904095 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:46.904160 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:46.904176 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:46.904183 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:46.906611 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:46.906634 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:46.906643 1181050 round_trippers.go:580]     Audit-Id: 0026162b-2a29-4da5-9eeb-56dd2b98a697
	I1024 20:02:46.906649 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:46.906655 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:46.906661 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:46.906673 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:46.906680 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:46 GMT
	I1024 20:02:46.906843 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:47.404962 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:47.404994 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:47.405004 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:47.405011 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:47.407625 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:47.407689 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:47.407703 1181050 round_trippers.go:580]     Audit-Id: f97b63c2-8f1d-41a9-90c9-37819293f003
	I1024 20:02:47.407711 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:47.407717 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:47.407723 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:47.407730 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:47.407740 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:47 GMT
	I1024 20:02:47.407833 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:47.408217 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:47.904163 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:47.904185 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:47.904195 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:47.904202 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:47.906614 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:47.906636 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:47.906645 1181050 round_trippers.go:580]     Audit-Id: 86875e22-635b-4d93-8c8e-7fd1520184c2
	I1024 20:02:47.906652 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:47.906658 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:47.906664 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:47.906670 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:47.906681 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:47 GMT
	I1024 20:02:47.906803 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:48.404899 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:48.404921 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:48.404931 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:48.404938 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:48.407497 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:48.407518 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:48.407527 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:48 GMT
	I1024 20:02:48.407533 1181050 round_trippers.go:580]     Audit-Id: 058229cd-b30b-49ab-a69c-20268ce30d4c
	I1024 20:02:48.407539 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:48.407546 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:48.407552 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:48.407558 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:48.407670 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:48.904754 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:48.904778 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:48.904789 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:48.904797 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:48.907267 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:48.907292 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:48.907300 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:48.907307 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:48.907313 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:48.907320 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:48 GMT
	I1024 20:02:48.907326 1181050 round_trippers.go:580]     Audit-Id: 3672ec38-14e5-47a0-b16b-02ed7379b0e9
	I1024 20:02:48.907333 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:48.907459 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:49.404579 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:49.404601 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:49.404610 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:49.404617 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:49.407082 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:49.407106 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:49.407114 1181050 round_trippers.go:580]     Audit-Id: 2054db76-f196-4d55-ac71-e8e3417f5869
	I1024 20:02:49.407120 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:49.407126 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:49.407133 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:49.407139 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:49.407146 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:49 GMT
	I1024 20:02:49.407269 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:49.904123 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:49.904146 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:49.904156 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:49.904165 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:49.906780 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:49.906804 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:49.906813 1181050 round_trippers.go:580]     Audit-Id: 5b403520-3072-4d0a-a643-f2369b85ee11
	I1024 20:02:49.906820 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:49.906826 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:49.906832 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:49.906838 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:49.906848 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:49 GMT
	I1024 20:02:49.907207 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:49.907646 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:50.404000 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:50.404023 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:50.404033 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:50.404042 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:50.406554 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:50.406573 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:50.406582 1181050 round_trippers.go:580]     Audit-Id: f9f50294-abca-4767-84bd-9486e078082d
	I1024 20:02:50.406588 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:50.406594 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:50.406600 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:50.406607 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:50.406613 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:50 GMT
	I1024 20:02:50.406730 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:50.904539 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:50.904563 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:50.904573 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:50.904580 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:50.906920 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:50.906941 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:50.906950 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:50.906961 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:50.906975 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:50 GMT
	I1024 20:02:50.906985 1181050 round_trippers.go:580]     Audit-Id: 44e02234-2268-4b2d-8c54-1f3d0fe1f536
	I1024 20:02:50.906992 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:50.907001 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:50.907419 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:51.404820 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:51.404843 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:51.404853 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:51.404860 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:51.407346 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:51.407367 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:51.407375 1181050 round_trippers.go:580]     Audit-Id: 031f776a-6fc8-4958-b942-65a819643c9f
	I1024 20:02:51.407381 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:51.407388 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:51.407394 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:51.407400 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:51.407411 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:51 GMT
	I1024 20:02:51.407636 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:51.904664 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:51.904685 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:51.904695 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:51.904703 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:51.907142 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:51.907165 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:51.907173 1181050 round_trippers.go:580]     Audit-Id: f528848d-c48f-49dc-8c01-b59e6ef68bf4
	I1024 20:02:51.907179 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:51.907186 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:51.907192 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:51.907199 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:51.907208 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:51 GMT
	I1024 20:02:51.907460 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:51.907839 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
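	[editor's note] The loop visible above repeats roughly every 500ms: minikube GETs /api/v1/nodes/multinode-773966-m02 and inspects the returned Node's Ready condition until it turns True. Below is a minimal client-go sketch of the same pattern, assuming a default kubeconfig at ~/.kube/config; it is an illustration of the technique, not minikube's actual node_ready.go implementation.

	// nodeready_sketch.go: poll a Node until its Ready condition is True.
	// Illustrative sketch only; minikube's node_ready.go is the real code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isNodeReady reports whether the Node's NodeReady condition is True.
	func isNodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig in the default location, as kubectl would use.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, the cadence visible in the log timestamps above.
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-773966-m02", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if isNodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}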
	I1024 20:02:52.404705 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:52.404730 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:52.404741 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:52.404748 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:52.407318 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:52.407342 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:52.407350 1181050 round_trippers.go:580]     Audit-Id: 16043e6b-7431-4119-93b7-b20d2dd77862
	I1024 20:02:52.407357 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:52.407363 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:52.407369 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:52.407376 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:52.407386 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:52 GMT
	I1024 20:02:52.407651 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:52.904041 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:52.904065 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:52.904077 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:52.904085 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:52.906867 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:52.906887 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:52.906899 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:52 GMT
	I1024 20:02:52.906906 1181050 round_trippers.go:580]     Audit-Id: 04f9c460-8f32-4878-b06c-5f664cb58afe
	I1024 20:02:52.906915 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:52.906921 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:52.906927 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:52.906933 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:52.907298 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:53.404247 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:53.404270 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:53.404281 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:53.404289 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:53.406820 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:53.406844 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:53.406853 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:53.406861 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:53.406867 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:53.406873 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:53 GMT
	I1024 20:02:53.406879 1181050 round_trippers.go:580]     Audit-Id: d18fdae4-4bad-402b-b9e4-d1b3f6244370
	I1024 20:02:53.406886 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:53.407155 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:53.904261 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:53.904284 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:53.904294 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:53.904302 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:53.906963 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:53.906993 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:53.907001 1181050 round_trippers.go:580]     Audit-Id: d0af5967-f6bb-4a69-8095-071818033008
	I1024 20:02:53.907008 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:53.907014 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:53.907020 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:53.907028 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:53.907035 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:53 GMT
	I1024 20:02:53.907165 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:54.404056 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:54.404077 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:54.404087 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:54.404094 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:54.406611 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:54.406636 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:54.406645 1181050 round_trippers.go:580]     Audit-Id: 3e0ba65c-65bc-4f44-a20a-db13ffa7a19d
	I1024 20:02:54.406652 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:54.406659 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:54.406666 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:54.406676 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:54.406685 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:54 GMT
	I1024 20:02:54.406880 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:54.407257 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:54.904971 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:54.904993 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:54.905003 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:54.905010 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:54.907395 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:54.907413 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:54.907422 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:54.907428 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:54.907434 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:54.907441 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:54.907451 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:54 GMT
	I1024 20:02:54.907457 1181050 round_trippers.go:580]     Audit-Id: 14aa2195-cd34-465e-bd59-0eaf8121379b
	I1024 20:02:54.907720 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:55.404358 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:55.404384 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:55.404394 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:55.404401 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:55.406857 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:55.406878 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:55.406888 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:55.406894 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:55.406900 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:55.406906 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:55.406913 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:55 GMT
	I1024 20:02:55.406928 1181050 round_trippers.go:580]     Audit-Id: a7255423-4a0f-46af-aa93-65502eb50428
	I1024 20:02:55.407200 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:55.904809 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:55.904832 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:55.904842 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:55.904849 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:55.907246 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:55.907266 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:55.907274 1181050 round_trippers.go:580]     Audit-Id: 2831164f-933d-4bfe-a84d-295aa348ea22
	I1024 20:02:55.907281 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:55.907287 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:55.907293 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:55.907300 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:55.907307 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:55 GMT
	I1024 20:02:55.907464 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:56.404021 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:56.404043 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:56.404054 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:56.404061 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:56.406542 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:56.406562 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:56.406570 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:56.406576 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:56.406583 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:56.406589 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:56.406597 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:56 GMT
	I1024 20:02:56.406603 1181050 round_trippers.go:580]     Audit-Id: 84906cd2-552c-4627-bff2-b9d328356e2c
	I1024 20:02:56.406703 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:56.904729 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:56.904751 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:56.904761 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:56.904768 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:56.907267 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:56.907286 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:56.907294 1181050 round_trippers.go:580]     Audit-Id: 5de9d640-8add-45ee-95bc-499d0927125f
	I1024 20:02:56.907301 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:56.907307 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:56.907313 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:56.907320 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:56.907326 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:56 GMT
	I1024 20:02:56.907424 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:56.907790 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:57.403973 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:57.403997 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:57.404008 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:57.404015 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:57.406435 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:57.406452 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:57.406466 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:57.406472 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:57.406478 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:57 GMT
	I1024 20:02:57.406486 1181050 round_trippers.go:580]     Audit-Id: 05f33056-f7f9-449c-a24c-917ff4624ae3
	I1024 20:02:57.406496 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:57.406502 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:57.406605 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:57.904689 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:57.904713 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:57.904723 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:57.904731 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:57.907193 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:57.907216 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:57.907225 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:57 GMT
	I1024 20:02:57.907231 1181050 round_trippers.go:580]     Audit-Id: 4b2a1a5c-e1cf-40ca-86a5-ddd2ea649bcf
	I1024 20:02:57.907237 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:57.907244 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:57.907250 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:57.907256 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:57.907525 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:58.404712 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:58.404734 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:58.404745 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:58.404752 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:58.407284 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:58.407310 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:58.407318 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:58.407325 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:58.407332 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:58.407339 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:58.407345 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:58 GMT
	I1024 20:02:58.407351 1181050 round_trippers.go:580]     Audit-Id: 3205d157-0163-42c6-977f-1fa7dce818f0
	I1024 20:02:58.407513 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:58.904714 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:58.904735 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:58.904745 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:58.904758 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:58.907246 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:58.907266 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:58.907274 1181050 round_trippers.go:580]     Audit-Id: 5e0a21a7-76a0-4390-8e6f-f21edf5caa4e
	I1024 20:02:58.907281 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:58.907287 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:58.907293 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:58.907299 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:58.907305 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:58 GMT
	I1024 20:02:58.907433 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:59.404000 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:59.404023 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:59.404033 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:59.404041 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:59.406736 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:59.406759 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:59.406768 1181050 round_trippers.go:580]     Audit-Id: 86323480-baeb-4d51-968c-e595da589e34
	I1024 20:02:59.406774 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:59.406780 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:59.406788 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:59.406794 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:59.406801 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:59 GMT
	I1024 20:02:59.406997 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:02:59.407364 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:02:59.904050 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:02:59.904075 1181050 round_trippers.go:469] Request Headers:
	I1024 20:02:59.904085 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:02:59.904093 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:02:59.906741 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:02:59.906762 1181050 round_trippers.go:577] Response Headers:
	I1024 20:02:59.906771 1181050 round_trippers.go:580]     Audit-Id: 52fe542f-c76c-433b-bd39-45a21e247573
	I1024 20:02:59.906777 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:02:59.906784 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:02:59.906790 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:02:59.906797 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:02:59.906803 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:02:59 GMT
	I1024 20:02:59.906910 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:00.404022 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:00.404048 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:00.404059 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:00.404068 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:00.406954 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:00.406997 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:00.407007 1181050 round_trippers.go:580]     Audit-Id: 21169629-c3b9-48f5-9afa-47a019640791
	I1024 20:03:00.407014 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:00.407020 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:00.407026 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:00.407035 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:00.407060 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:00 GMT
	I1024 20:03:00.407179 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:00.905005 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:00.905029 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:00.905038 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:00.905045 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:00.907554 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:00.907577 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:00.907587 1181050 round_trippers.go:580]     Audit-Id: 368c0649-850c-4921-89d4-5fd422e02fb0
	I1024 20:03:00.907594 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:00.907600 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:00.907606 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:00.907612 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:00.907619 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:00 GMT
	I1024 20:03:00.907939 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:01.404812 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:01.404883 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:01.404922 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:01.404949 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:01.407620 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:01.407638 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:01.407648 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:01.407655 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:01.407662 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:01 GMT
	I1024 20:03:01.407668 1181050 round_trippers.go:580]     Audit-Id: 96736497-074a-48b3-be29-dc2b1818c1ae
	I1024 20:03:01.407674 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:01.407681 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:01.407785 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:01.408144 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:03:01.903952 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:01.903976 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:01.903987 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:01.903994 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:01.906634 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:01.906659 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:01.906667 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:01.906674 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:01.906680 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:01 GMT
	I1024 20:03:01.906687 1181050 round_trippers.go:580]     Audit-Id: 55cc8697-e227-4327-a2c6-995d01cfd6e1
	I1024 20:03:01.906700 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:01.906708 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:01.906832 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:02.404304 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:02.404330 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:02.404340 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:02.404349 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:02.406846 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:02.406872 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:02.406881 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:02.406889 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:02 GMT
	I1024 20:03:02.406895 1181050 round_trippers.go:580]     Audit-Id: 85fac633-1b1d-40c7-98b9-44471342802a
	I1024 20:03:02.406901 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:02.406908 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:02.406914 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:02.407436 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:02.904506 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:02.904531 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:02.904541 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:02.904548 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:02.906954 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:02.906988 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:02.906997 1181050 round_trippers.go:580]     Audit-Id: f7c94547-de3b-49d0-b37b-70ac9f654c06
	I1024 20:03:02.907003 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:02.907009 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:02.907015 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:02.907021 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:02.907027 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:02 GMT
	I1024 20:03:02.907172 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:03.404017 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:03.404042 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:03.404052 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:03.404060 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:03.406622 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:03.406654 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:03.406662 1181050 round_trippers.go:580]     Audit-Id: 1ec990a4-d7b9-49bd-ae23-c38200c112a6
	I1024 20:03:03.406671 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:03.406677 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:03.406683 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:03.406689 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:03.406695 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:03 GMT
	I1024 20:03:03.406881 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:03.904975 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:03.905017 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:03.905028 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:03.905035 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:03.907518 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:03.907543 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:03.907551 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:03.907558 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:03.907565 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:03 GMT
	I1024 20:03:03.907575 1181050 round_trippers.go:580]     Audit-Id: fa82d3b0-b9a8-47e2-b169-17d3b14660c0
	I1024 20:03:03.907582 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:03.907588 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:03.907777 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"473","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I1024 20:03:03.908152 1181050 node_ready.go:58] node "multinode-773966-m02" has status "Ready":"False"
	I1024 20:03:04.404289 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:04.404311 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.404321 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.404328 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.406905 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.406929 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.406937 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.406944 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.406950 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.406956 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.406962 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.406969 1181050 round_trippers.go:580]     Audit-Id: f3ef62b2-d79b-486f-b421-89a01defad46
	I1024 20:03:04.407132 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"495","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1024 20:03:04.407507 1181050 node_ready.go:49] node "multinode-773966-m02" has status "Ready":"True"
	I1024 20:03:04.407525 1181050 node_ready.go:38] duration metric: took 31.010605823s waiting for node "multinode-773966-m02" to be "Ready" ...
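
For context: the node_ready lines above are one GET of /api/v1/nodes/multinode-773966-m02 roughly every 500ms until the node's Ready condition flips to True. A minimal client-go sketch of that loop (not minikube's actual code; the function name and 6-minute cap are illustrative):

    package example

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the API server until the named node reports
    // Ready=True, mirroring the repeated GET /api/v1/nodes/<name> above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, cond := range node.Status.Conditions {
    				if cond.Type == corev1.NodeReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
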
	I1024 20:03:04.407540 1181050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 20:03:04.407608 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1024 20:03:04.407616 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.407624 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.407631 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.411079 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:03:04.411134 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.411149 1181050 round_trippers.go:580]     Audit-Id: c25c9403-63bf-418b-a0e8-7256d9c3d16a
	I1024 20:03:04.411157 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.411163 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.411169 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.411176 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.411182 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.412221 1181050 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"407","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1024 20:03:04.415083 1181050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xxljp" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.415175 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xxljp
	I1024 20:03:04.415189 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.415199 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.415210 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.417691 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.417713 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.417722 1181050 round_trippers.go:580]     Audit-Id: 7fe39ab2-c6a7-4357-a8bf-3a6553ff2dd2
	I1024 20:03:04.417729 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.417760 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.417768 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.417777 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.417784 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.418199 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xxljp","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43","resourceVersion":"407","creationTimestamp":"2023-10-24T20:01:42Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"6ed9f91d-8cbe-4297-8871-667f3885b58f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6ed9f91d-8cbe-4297-8871-667f3885b58f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1024 20:03:04.418701 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:03:04.418719 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.418728 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.418735 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.420966 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.420983 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.420991 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.420998 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.421004 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.421010 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.421016 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.421023 1181050 round_trippers.go:580]     Audit-Id: beeb3127-327b-4413-b961-004098abf9d3
	I1024 20:03:04.421219 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:03:04.421655 1181050 pod_ready.go:92] pod "coredns-5dd5756b68-xxljp" in "kube-system" namespace has status "Ready":"True"
	I1024 20:03:04.421667 1181050 pod_ready.go:81] duration metric: took 6.5583ms waiting for pod "coredns-5dd5756b68-xxljp" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.421676 1181050 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.421758 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-773966
	I1024 20:03:04.421764 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.421772 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.421778 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.424136 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.424151 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.424159 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.424165 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.424171 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.424178 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.424185 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.424191 1181050 round_trippers.go:580]     Audit-Id: f244e92f-206c-4651-8ca9-7d65ac6b748c
	I1024 20:03:04.424322 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-773966","namespace":"kube-system","uid":"6d702ec5-2b3a-460f-83bd-afe267c6e11a","resourceVersion":"380","creationTimestamp":"2023-10-24T20:01:29Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"883e2738bfd207cffca852790a091db1","kubernetes.io/config.mirror":"883e2738bfd207cffca852790a091db1","kubernetes.io/config.seen":"2023-10-24T20:01:29.175728694Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1024 20:03:04.424795 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:03:04.424814 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.424822 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.424831 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.427066 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.427086 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.427094 1181050 round_trippers.go:580]     Audit-Id: f29bc4cc-eec5-4c3a-aff8-8556aaf18726
	I1024 20:03:04.427100 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.427107 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.427114 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.427120 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.427130 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.427317 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:03:04.427686 1181050 pod_ready.go:92] pod "etcd-multinode-773966" in "kube-system" namespace has status "Ready":"True"
	I1024 20:03:04.427703 1181050 pod_ready.go:81] duration metric: took 6.019922ms waiting for pod "etcd-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.427720 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.427784 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-773966
	I1024 20:03:04.427795 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.427802 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.427809 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.430125 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.430145 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.430156 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.430163 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.430169 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.430176 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.430185 1181050 round_trippers.go:580]     Audit-Id: 04b35f28-a363-4920-9f18-852fcb5b2532
	I1024 20:03:04.430196 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.430568 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-773966","namespace":"kube-system","uid":"b2bdeafa-2435-4a3a-ac17-6ce1c060ac88","resourceVersion":"381","creationTimestamp":"2023-10-24T20:01:29Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e97211f0bb5112c2116bdaec5410f7ba","kubernetes.io/config.mirror":"e97211f0bb5112c2116bdaec5410f7ba","kubernetes.io/config.seen":"2023-10-24T20:01:29.175734093Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1024 20:03:04.431075 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:03:04.431090 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.431099 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.431107 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.433316 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.433373 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.433388 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.433395 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.433402 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.433408 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.433415 1181050 round_trippers.go:580]     Audit-Id: b4a1a9fa-e30f-4162-8d40-d873d8a78e83
	I1024 20:03:04.433425 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.433519 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:03:04.433917 1181050 pod_ready.go:92] pod "kube-apiserver-multinode-773966" in "kube-system" namespace has status "Ready":"True"
	I1024 20:03:04.433936 1181050 pod_ready.go:81] duration metric: took 6.201148ms waiting for pod "kube-apiserver-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.433947 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.434005 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-773966
	I1024 20:03:04.434016 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.434023 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.434030 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.436303 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.436326 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.436335 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.436341 1181050 round_trippers.go:580]     Audit-Id: c4c87a09-bf68-40c7-9d83-7ec31eb16dd5
	I1024 20:03:04.436347 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.436354 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.436362 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.436374 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.436505 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-773966","namespace":"kube-system","uid":"36ab85e7-0c8e-4da4-940a-428d743184e0","resourceVersion":"310","creationTimestamp":"2023-10-24T20:01:29Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"95f71f86968dd4700c51541369b0c606","kubernetes.io/config.mirror":"95f71f86968dd4700c51541369b0c606","kubernetes.io/config.seen":"2023-10-24T20:01:29.175735496Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1024 20:03:04.437013 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:03:04.437026 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.437034 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.437041 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.439487 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.439511 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.439519 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.439525 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.439531 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.439538 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.439544 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.439550 1181050 round_trippers.go:580]     Audit-Id: 1d3e2a0f-5d4c-4297-8e5e-462ecddfa4dc
	I1024 20:03:04.439654 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:03:04.440028 1181050 pod_ready.go:92] pod "kube-controller-manager-multinode-773966" in "kube-system" namespace has status "Ready":"True"
	I1024 20:03:04.440046 1181050 pod_ready.go:81] duration metric: took 6.091914ms waiting for pod "kube-controller-manager-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.440059 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cmljn" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.604346 1181050 request.go:629] Waited for 164.22199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cmljn
	I1024 20:03:04.604445 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cmljn
	I1024 20:03:04.604456 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.604465 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.604476 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.607054 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.607122 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.607137 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.607148 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.607155 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.607170 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.607177 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.607198 1181050 round_trippers.go:580]     Audit-Id: 9981fb57-6341-475f-bcc3-3ca26a28b2ef
	I1024 20:03:04.607607 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cmljn","generateName":"kube-proxy-","namespace":"kube-system","uid":"36db5775-a462-4f71-bd0f-ac2a79b5ab45","resourceVersion":"459","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"438118bc-681e-453e-be1a-d33418e8630d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"438118bc-681e-453e-be1a-d33418e8630d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
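
The "Waited for ... due to client-side throttling, not priority and fairness" lines here and below come from client-go's own token-bucket rate limiter, not from the API server: once the local burst is spent, requests queue on the client. If the defaults (QPS 5 with a burst of 10, to the best of my knowledge) are too tight for a polling client, they can be raised on the rest.Config before building the clientset; a sketch, with kubeconfigPath as a placeholder:

    package example

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset whose client-side rate limiter allows
    // 50 requests/s with a burst of 100 (values are illustrative).
    func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50
    	cfg.Burst = 100
    	return kubernetes.NewForConfig(cfg)
    }
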
	I1024 20:03:04.804373 1181050 request.go:629] Waited for 196.275268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:04.804452 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966-m02
	I1024 20:03:04.804475 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:04.804491 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:04.804500 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:04.807100 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:04.807213 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:04.807253 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:04.807278 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:04 GMT
	I1024 20:03:04.807297 1181050 round_trippers.go:580]     Audit-Id: 1f5acf67-3277-4a4a-b2a5-13db99ff4816
	I1024 20:03:04.807329 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:04.807353 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:04.807364 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:04.807480 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966-m02","uid":"49fca159-2f39-4962-a2e5-3821216465ab","resourceVersion":"495","creationTimestamp":"2023-10-24T20:02:32Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:02:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1024 20:03:04.807879 1181050 pod_ready.go:92] pod "kube-proxy-cmljn" in "kube-system" namespace has status "Ready":"True"
	I1024 20:03:04.807916 1181050 pod_ready.go:81] duration metric: took 367.849088ms waiting for pod "kube-proxy-cmljn" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:04.807935 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsvnn" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:05.004446 1181050 request.go:629] Waited for 196.364777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsvnn
	I1024 20:03:05.004571 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jsvnn
	I1024 20:03:05.004614 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:05.004659 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:05.004692 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:05.007901 1181050 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1024 20:03:05.007977 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:05.008000 1181050 round_trippers.go:580]     Audit-Id: b29253d5-0476-44ba-89bf-559cf1c4b675
	I1024 20:03:05.008020 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:05.008052 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:05.008075 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:05.008090 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:05.008097 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:05 GMT
	I1024 20:03:05.008271 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsvnn","generateName":"kube-proxy-","namespace":"kube-system","uid":"99e468ec-c444-4fbf-8a1c-97bd7c654075","resourceVersion":"374","creationTimestamp":"2023-10-24T20:01:41Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"438118bc-681e-453e-be1a-d33418e8630d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"438118bc-681e-453e-be1a-d33418e8630d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1024 20:03:05.205107 1181050 request.go:629] Waited for 196.311542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:03:05.205237 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:03:05.205250 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:05.205260 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:05.205267 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:05.207886 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:05.207961 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:05.208022 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:05.208044 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:05.208061 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:05.208069 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:05 GMT
	I1024 20:03:05.208076 1181050 round_trippers.go:580]     Audit-Id: f0b37f98-a56e-40b5-81d1-be2b63ab7304
	I1024 20:03:05.208082 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:05.208214 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:03:05.208645 1181050 pod_ready.go:92] pod "kube-proxy-jsvnn" in "kube-system" namespace has status "Ready":"True"
	I1024 20:03:05.208669 1181050 pod_ready.go:81] duration metric: took 400.727109ms waiting for pod "kube-proxy-jsvnn" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:05.208680 1181050 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:05.405094 1181050 request.go:629] Waited for 196.330135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773966
	I1024 20:03:05.405155 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-773966
	I1024 20:03:05.405160 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:05.405170 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:05.405203 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:05.407698 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:05.407766 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:05.407787 1181050 round_trippers.go:580]     Audit-Id: 34398147-5a50-425d-85f6-eab2edeeca85
	I1024 20:03:05.407805 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:05.407862 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:05.407891 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:05.407904 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:05.407910 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:05 GMT
	I1024 20:03:05.408010 1181050 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-773966","namespace":"kube-system","uid":"0c4eebae-6ace-4cee-ba2c-72360a106163","resourceVersion":"379","creationTimestamp":"2023-10-24T20:01:29Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"daf0428413c67a76aa8986cb2e700828","kubernetes.io/config.mirror":"daf0428413c67a76aa8986cb2e700828","kubernetes.io/config.seen":"2023-10-24T20:01:29.175736800Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1024 20:03:05.604786 1181050 request.go:629] Waited for 196.33409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:03:05.604848 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-773966
	I1024 20:03:05.604858 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:05.604885 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:05.604897 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:05.607460 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:05.607533 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:05.607549 1181050 round_trippers.go:580]     Audit-Id: 3bf272b0-f22d-4e3d-9a16-64711504e728
	I1024 20:03:05.607556 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:05.607562 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:05.607569 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:05.607576 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:05.607582 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:05 GMT
	I1024 20:03:05.607710 1181050 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-24T20:01:25Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1024 20:03:05.608102 1181050 pod_ready.go:92] pod "kube-scheduler-multinode-773966" in "kube-system" namespace has status "Ready":"True"
	I1024 20:03:05.608118 1181050 pod_ready.go:81] duration metric: took 399.42766ms waiting for pod "kube-scheduler-multinode-773966" in "kube-system" namespace to be "Ready" ...
	I1024 20:03:05.608131 1181050 pod_ready.go:38] duration metric: took 1.200576705s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
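
Each pod_ready block above follows the same pattern: fetch the pods matching a system-critical label, then test each pod's PodReady condition. A condensed sketch of that readiness test (helper names are mine, not minikube's):

    package example

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // systemPodsReady reports whether every kube-system pod matching one of
    // the label selectors (e.g. "component=etcd", "k8s-app=kube-proxy") has
    // a PodReady condition of True.
    func systemPodsReady(ctx context.Context, cs kubernetes.Interface, selectors []string) (bool, error) {
    	for _, sel := range selectors {
    		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			return false, err
    		}
    		for i := range pods.Items {
    			if !isPodReady(&pods.Items[i]) {
    				return false, nil
    			}
    		}
    	}
    	return true, nil
    }

    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }
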
	I1024 20:03:05.608159 1181050 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:03:05.608236 1181050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:03:05.622118 1181050 system_svc.go:56] duration metric: took 13.960189ms WaitForService to wait for kubelet.
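
The kubelet check above shells into the node and relies on systemctl's exit status rather than its output. A simplified local equivalent (a sketch; the log's exact invocation differs slightly):

    package example

    import "os/exec"

    // kubeletActive returns true iff the kubelet unit is active: with
    // --quiet, systemctl prints nothing and exits 0 only when it is.
    func kubeletActive() bool {
    	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
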
	I1024 20:03:05.622188 1181050 kubeadm.go:581] duration metric: took 32.253458345s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:03:05.622218 1181050 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:03:05.804607 1181050 request.go:629] Waited for 182.300661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1024 20:03:05.804667 1181050 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1024 20:03:05.804677 1181050 round_trippers.go:469] Request Headers:
	I1024 20:03:05.804686 1181050 round_trippers.go:473]     Accept: application/json, */*
	I1024 20:03:05.804696 1181050 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1024 20:03:05.807277 1181050 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1024 20:03:05.807337 1181050 round_trippers.go:577] Response Headers:
	I1024 20:03:05.807351 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1fa62413-de88-49e8-8d18-fbdc4ed0b0ca
	I1024 20:03:05.807358 1181050 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 7f08fe08-0b2d-4b6d-9607-83e08e43842d
	I1024 20:03:05.807370 1181050 round_trippers.go:580]     Date: Tue, 24 Oct 2023 20:03:05 GMT
	I1024 20:03:05.807377 1181050 round_trippers.go:580]     Audit-Id: 5d5ce255-d275-40a2-baae-eee59cd1b61d
	I1024 20:03:05.807394 1181050 round_trippers.go:580]     Cache-Control: no-cache, private
	I1024 20:03:05.807404 1181050 round_trippers.go:580]     Content-Type: application/json
	I1024 20:03:05.807570 1181050 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"496"},"items":[{"metadata":{"name":"multinode-773966","uid":"deb018d9-2740-4f91-ac40-a4037b9840a0","resourceVersion":"391","creationTimestamp":"2023-10-24T20:01:26Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-773966","kubernetes.io/os":"linux","minikube.k8s.io/commit":"88664ba50fde9b6a83229504adac261395c47fca","minikube.k8s.io/name":"multinode-773966","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_24T20_01_30_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1024 20:03:05.808187 1181050 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 20:03:05.808205 1181050 node_conditions.go:123] node cpu capacity is 2
	I1024 20:03:05.808214 1181050 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1024 20:03:05.808220 1181050 node_conditions.go:123] node cpu capacity is 2
	I1024 20:03:05.808229 1181050 node_conditions.go:105] duration metric: took 186.00568ms to run NodePressure ...
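
The node_conditions figures above are read straight from each node's status.capacity; something like the following sketch produces the same two lines per node:

    package example

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists every node and prints the two capacity
    // figures logged above: ephemeral storage and CPU count.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for i := range nodes.Items {
    		c := nodes.Items[i].Status.Capacity
    		fmt.Printf("node storage ephemeral capacity is %s\n", c.StorageEphemeral())
    		fmt.Printf("node cpu capacity is %s\n", c.Cpu())
    	}
    	return nil
    }
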
	I1024 20:03:05.808242 1181050 start.go:228] waiting for startup goroutines ...
	I1024 20:03:05.808274 1181050 start.go:242] writing updated cluster config ...
	I1024 20:03:05.808581 1181050 ssh_runner.go:195] Run: rm -f paused
	I1024 20:03:05.870064 1181050 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 20:03:05.873976 1181050 out.go:177] * Done! kubectl is now configured to use "multinode-773966" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 24 20:02:14 multinode-773966 crio[894]: time="2023-10-24 20:02:14.092340424Z" level=info msg="Starting container: 7c7c3b10e5dc591403545705e2d2110bf10f844a934c43cfde04aab2fd58b242" id=b7d277fa-f3e7-4431-a8ae-e021a84e8f86 name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 20:02:14 multinode-773966 crio[894]: time="2023-10-24 20:02:14.097536818Z" level=info msg="Created container 4f60fe3312e68cee0a79bdae30797bdac4a427a059cd8c662b13a503dae1481c: kube-system/coredns-5dd5756b68-xxljp/coredns" id=80ef525f-30de-42c5-b196-28c22a6b20e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 20:02:14 multinode-773966 crio[894]: time="2023-10-24 20:02:14.098313225Z" level=info msg="Starting container: 4f60fe3312e68cee0a79bdae30797bdac4a427a059cd8c662b13a503dae1481c" id=ed52ea82-73cd-475f-bb6d-eacad3526ef2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 20:02:14 multinode-773966 crio[894]: time="2023-10-24 20:02:14.105613840Z" level=info msg="Started container" PID=1922 containerID=7c7c3b10e5dc591403545705e2d2110bf10f844a934c43cfde04aab2fd58b242 description=kube-system/storage-provisioner/storage-provisioner id=b7d277fa-f3e7-4431-a8ae-e021a84e8f86 name=/runtime.v1.RuntimeService/StartContainer sandboxID=abd2497e349909dd1f6b4115c531874353cfb85a1a427a6fcdf14595e99d7a87
	Oct 24 20:02:14 multinode-773966 crio[894]: time="2023-10-24 20:02:14.114279093Z" level=info msg="Started container" PID=1927 containerID=4f60fe3312e68cee0a79bdae30797bdac4a427a059cd8c662b13a503dae1481c description=kube-system/coredns-5dd5756b68-xxljp/coredns id=ed52ea82-73cd-475f-bb6d-eacad3526ef2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ddc5e80dd61edbfd8caeaeb4517baeb538aba1ae5a42f12df3ed8397e1ebb70
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.123499680Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-c622k/POD" id=d740dfc7-3f15-4303-8376-fd76da2eb3b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.123560299Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.158219252Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-c622k Namespace:default ID:6c81036e9df120165c74d953c4904ab92a4099afd4094ff6ef6b0176eabe72fe UID:bd8f56c1-c8c3-48f0-b541-38d8ebf577e0 NetNS:/var/run/netns/946d8e64-852d-40c5-8a68-098afcc5b76a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.158516392Z" level=info msg="Adding pod default_busybox-5bc68d56bd-c622k to CNI network \"kindnet\" (type=ptp)"
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.168868899Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-c622k Namespace:default ID:6c81036e9df120165c74d953c4904ab92a4099afd4094ff6ef6b0176eabe72fe UID:bd8f56c1-c8c3-48f0-b541-38d8ebf577e0 NetNS:/var/run/netns/946d8e64-852d-40c5-8a68-098afcc5b76a Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.169023418Z" level=info msg="Checking pod default_busybox-5bc68d56bd-c622k for CNI network kindnet (type=ptp)"
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.172927312Z" level=info msg="Ran pod sandbox 6c81036e9df120165c74d953c4904ab92a4099afd4094ff6ef6b0176eabe72fe with infra container: default/busybox-5bc68d56bd-c622k/POD" id=d740dfc7-3f15-4303-8376-fd76da2eb3b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.175471728Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=a9fe7308-bce8-4720-b935-cd5018b37755 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.175697819Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=a9fe7308-bce8-4720-b935-cd5018b37755 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.176751674Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=592afbfe-b35f-49a4-bf61-7ba682a4c819 name=/runtime.v1.ImageService/PullImage
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.177958184Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 24 20:03:07 multinode-773966 crio[894]: time="2023-10-24 20:03:07.805645981Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 24 20:03:10 multinode-773966 crio[894]: time="2023-10-24 20:03:10.283497090Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=592afbfe-b35f-49a4-bf61-7ba682a4c819 name=/runtime.v1.ImageService/PullImage
	Oct 24 20:03:10 multinode-773966 crio[894]: time="2023-10-24 20:03:10.284490924Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=290dbf17-933e-4bc2-a462-75589e852b58 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 20:03:10 multinode-773966 crio[894]: time="2023-10-24 20:03:10.285217206Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=290dbf17-933e-4bc2-a462-75589e852b58 name=/runtime.v1.ImageService/ImageStatus
	Oct 24 20:03:10 multinode-773966 crio[894]: time="2023-10-24 20:03:10.286164468Z" level=info msg="Creating container: default/busybox-5bc68d56bd-c622k/busybox" id=4928328a-7b76-44ca-a6de-b47decb4bdaf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 20:03:10 multinode-773966 crio[894]: time="2023-10-24 20:03:10.286373632Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 24 20:03:10 multinode-773966 crio[894]: time="2023-10-24 20:03:10.375785881Z" level=info msg="Created container d0fae48ed55160432099afc63cd587069a69134e250b941703005f45e06b35b3: default/busybox-5bc68d56bd-c622k/busybox" id=4928328a-7b76-44ca-a6de-b47decb4bdaf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 24 20:03:10 multinode-773966 crio[894]: time="2023-10-24 20:03:10.376517644Z" level=info msg="Starting container: d0fae48ed55160432099afc63cd587069a69134e250b941703005f45e06b35b3" id=0d0b73e2-14ae-4897-b322-9d27ab7b171f name=/runtime.v1.RuntimeService/StartContainer
	Oct 24 20:03:10 multinode-773966 crio[894]: time="2023-10-24 20:03:10.387017204Z" level=info msg="Started container" PID=2068 containerID=d0fae48ed55160432099afc63cd587069a69134e250b941703005f45e06b35b3 description=default/busybox-5bc68d56bd-c622k/busybox id=0d0b73e2-14ae-4897-b322-9d27ab7b171f name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c81036e9df120165c74d953c4904ab92a4099afd4094ff6ef6b0176eabe72fe
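
The CRI-O entries above trace one full CRI round trip for the busybox pod: ImageStatus (miss), PullImage, ImageStatus again, then CreateContainer and StartContainer. A skeletal Go client for the pull/start RPCs against the crio.sock endpoint from the logs (a sketch only: CreateContainer is omitted because it needs the full sandbox and container configs, and dialing the socket needs root):

    package example

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // pullAndStart issues the same RPCs named in the log: ImageService.PullImage
    // for the image, then RuntimeService.StartContainer for a container that has
    // already been created against a running sandbox.
    func pullAndStart(ctx context.Context, image, containerID string) error {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	img := runtimeapi.NewImageServiceClient(conn)
    	if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
    		Image: &runtimeapi.ImageSpec{Image: image},
    	}); err != nil {
    		return err
    	}

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: containerID})
    	return err
    }
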
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d0fae48ed5516       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   6c81036e9df12       busybox-5bc68d56bd-c622k
	4f60fe3312e68       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   7ddc5e80dd61e       coredns-5dd5756b68-xxljp
	7c7c3b10e5dc5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   abd2497e34990       storage-provisioner
	c7c9f7bcbd2a6       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   c5cd5fe86b1b0       kindnet-drz9j
	0b507753604aa       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                      About a minute ago   Running             kube-proxy                0                   b7cf2e8559253       kube-proxy-jsvnn
	90ad1d9aadd4e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   a797eb358199a       etcd-multinode-773966
	a4657e40ce5e0       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                      About a minute ago   Running             kube-apiserver            0                   43207bfb53e45       kube-apiserver-multinode-773966
	4fc5bd0d1bff9       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                      About a minute ago   Running             kube-controller-manager   0                   a3add5cf2f6a5       kube-controller-manager-multinode-773966
	1cbe37ffed0b7       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                      About a minute ago   Running             kube-scheduler            0                   47d44325ac7de       kube-scheduler-multinode-773966
	
	* 
	* ==> coredns [4f60fe3312e68cee0a79bdae30797bdac4a427a059cd8c662b13a503dae1481c] <==
	* [INFO] 10.244.0.3:45989 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012196s
	[INFO] 10.244.1.2:39091 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179528s
	[INFO] 10.244.1.2:39245 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001151683s
	[INFO] 10.244.1.2:50609 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097953s
	[INFO] 10.244.1.2:37648 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007474s
	[INFO] 10.244.1.2:43775 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001427834s
	[INFO] 10.244.1.2:33327 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008105s
	[INFO] 10.244.1.2:34752 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078949s
	[INFO] 10.244.1.2:47367 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000077891s
	[INFO] 10.244.0.3:36582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137132s
	[INFO] 10.244.0.3:56081 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000095269s
	[INFO] 10.244.0.3:54048 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070096s
	[INFO] 10.244.0.3:50119 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085521s
	[INFO] 10.244.1.2:41471 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116537s
	[INFO] 10.244.1.2:44311 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070392s
	[INFO] 10.244.1.2:56417 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064082s
	[INFO] 10.244.1.2:59297 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087983s
	[INFO] 10.244.0.3:32962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118711s
	[INFO] 10.244.0.3:57381 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000093415s
	[INFO] 10.244.0.3:41501 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078047s
	[INFO] 10.244.0.3:48572 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076061s
	[INFO] 10.244.1.2:46606 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132176s
	[INFO] 10.244.1.2:42940 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00007255s
	[INFO] 10.244.1.2:56794 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070351s
	[INFO] 10.244.1.2:41539 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097165s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-773966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-773966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=multinode-773966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T20_01_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:01:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773966
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:03:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:02:13 +0000   Tue, 24 Oct 2023 20:01:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:02:13 +0000   Tue, 24 Oct 2023 20:01:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:02:13 +0000   Tue, 24 Oct 2023 20:01:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:02:13 +0000   Tue, 24 Oct 2023 20:02:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-773966
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 da97ccc6dc6147cbbe7db738e49785c4
	  System UUID:                bd14c800-18e5-40a9-b3ef-6b7a174a83f5
	  Boot ID:                    f05db690-1143-478b-8d18-db062f271a9b
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-c622k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-xxljp                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     93s
	  kube-system                 etcd-multinode-773966                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         106s
	  kube-system                 kindnet-drz9j                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-multinode-773966             250m (12%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-controller-manager-multinode-773966    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-jsvnn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-multinode-773966             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 92s   kube-proxy       
	  Normal  Starting                 106s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s  kubelet          Node multinode-773966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s  kubelet          Node multinode-773966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s  kubelet          Node multinode-773966 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           94s   node-controller  Node multinode-773966 event: Registered Node multinode-773966 in Controller
	  Normal  NodeReady                62s   kubelet          Node multinode-773966 status is now: NodeReady
	
	
	Name:               multinode-773966-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-773966-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 20:02:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-773966-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Oct 2023 20:03:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 20:03:04 +0000   Tue, 24 Oct 2023 20:02:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 20:03:04 +0000   Tue, 24 Oct 2023 20:02:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 20:03:04 +0000   Tue, 24 Oct 2023 20:02:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 20:03:04 +0000   Tue, 24 Oct 2023 20:03:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-773966-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 a965ae4452d24ce2925dcbb7f354f9f8
	  System UUID:                643398f7-1bb4-4616-8020-fa6480ae0387
	  Boot ID:                    f05db690-1143-478b-8d18-db062f271a9b
	  Kernel Version:             5.15.0-1048-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-wldjb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kindnet-kcxpk               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      44s
	  kube-system                 kube-proxy-cmljn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  NodeHasSufficientMemory  44s (x5 over 45s)  kubelet          Node multinode-773966-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    44s (x5 over 45s)  kubelet          Node multinode-773966-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     44s (x5 over 45s)  kubelet          Node multinode-773966-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node multinode-773966-m02 event: Registered Node multinode-773966-m02 in Controller
	  Normal  NodeReady                12s                kubelet          Node multinode-773966-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001163] FS-Cache: O-key=[8] '3a643b0000000000'
	[  +0.000725] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001044] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=0000000080c1e564
	[  +0.001072] FS-Cache: N-key=[8] '3a643b0000000000'
	[  +0.003112] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=000000003058710d
	[  +0.001176] FS-Cache: O-key=[8] '3a643b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000949] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=0000000010398763
	[  +0.001113] FS-Cache: N-key=[8] '3a643b0000000000'
	[  +3.176984] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000975] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000953f0312
	[  +0.001131] FS-Cache: O-key=[8] '39643b0000000000'
	[  +0.000732] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=00000000c4f274aa
	[  +0.001081] FS-Cache: N-key=[8] '39643b0000000000'
	[  +0.310132] FS-Cache: Duplicate cookie detected
	[  +0.000734] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000998] FS-Cache: O-cookie d=000000003cd4f259{9p.inode} n=00000000a06fabf2
	[  +0.001138] FS-Cache: O-key=[8] '3f643b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000996] FS-Cache: N-cookie d=000000003cd4f259{9p.inode} n=000000004c0f819e
	[  +0.001053] FS-Cache: N-key=[8] '3f643b0000000000'
	
	* 
	* ==> etcd [90ad1d9aadd4e9ec6cb2984236b9118222eb15ada8109ad9f474b7a0c26d0e3e] <==
	* {"level":"info","ts":"2023-10-24T20:01:22.588922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-24T20:01:22.589022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-24T20:01:22.593821Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-24T20:01:22.594004Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-24T20:01:22.594018Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-24T20:01:22.594871Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-24T20:01:22.594921Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-24T20:01:22.671491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-24T20:01:22.671588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-24T20:01:22.671604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-24T20:01:22.671627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-24T20:01:22.671634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-24T20:01:22.671644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-24T20:01:22.671653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-24T20:01:22.672779Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:01:22.673933Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-773966 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-24T20:01:22.673963Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:01:22.675197Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-24T20:01:22.675259Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-24T20:01:22.676155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-24T20:01:22.676822Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:01:22.676905Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:01:22.681793Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-24T20:01:22.684009Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-24T20:01:22.684035Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  20:03:16 up  9:45,  0 users,  load average: 1.08, 1.47, 1.17
	Linux multinode-773966 5.15.0-1048-aws #53~20.04.1-Ubuntu SMP Wed Oct 4 16:51:38 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [c7c9f7bcbd2a6920e5844ff4bab9bbcaf1896fce51b46cd3588f00c2a235be4b] <==
	* I1024 20:02:13.177677       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 20:02:13.177708       1 main.go:227] handling current node
	I1024 20:02:23.195318       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 20:02:23.195347       1 main.go:227] handling current node
	I1024 20:02:33.208081       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 20:02:33.208111       1 main.go:227] handling current node
	I1024 20:02:33.208122       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1024 20:02:33.208129       1 main.go:250] Node multinode-773966-m02 has CIDR [10.244.1.0/24] 
	I1024 20:02:33.208285       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1024 20:02:43.212837       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 20:02:43.212868       1 main.go:227] handling current node
	I1024 20:02:43.212880       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1024 20:02:43.212886       1 main.go:250] Node multinode-773966-m02 has CIDR [10.244.1.0/24] 
	I1024 20:02:53.224646       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 20:02:53.224672       1 main.go:227] handling current node
	I1024 20:02:53.224683       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1024 20:02:53.224688       1 main.go:250] Node multinode-773966-m02 has CIDR [10.244.1.0/24] 
	I1024 20:03:03.237524       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 20:03:03.237553       1 main.go:227] handling current node
	I1024 20:03:03.237563       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1024 20:03:03.237569       1 main.go:250] Node multinode-773966-m02 has CIDR [10.244.1.0/24] 
	I1024 20:03:13.249116       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1024 20:03:13.249147       1 main.go:227] handling current node
	I1024 20:03:13.249159       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1024 20:03:13.249166       1 main.go:250] Node multinode-773966-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [a4657e40ce5e0e784287a6a642839ca9276fe61b60bcd73794e0e8f4ff30cc96] <==
	* I1024 20:01:26.087700       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1024 20:01:26.087877       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1024 20:01:26.088050       1 aggregator.go:166] initial CRD sync complete...
	I1024 20:01:26.088068       1 autoregister_controller.go:141] Starting autoregister controller
	I1024 20:01:26.088075       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1024 20:01:26.088080       1 cache.go:39] Caches are synced for autoregister controller
	I1024 20:01:26.104964       1 controller.go:624] quota admission added evaluator for: namespaces
	I1024 20:01:26.115913       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1024 20:01:26.163519       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 20:01:26.165966       1 shared_informer.go:318] Caches are synced for configmaps
	I1024 20:01:26.887019       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1024 20:01:26.893225       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1024 20:01:26.893317       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1024 20:01:27.436067       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 20:01:27.470669       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1024 20:01:27.532056       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1024 20:01:27.538630       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1024 20:01:27.539729       1 controller.go:624] quota admission added evaluator for: endpoints
	I1024 20:01:27.543993       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1024 20:01:28.076364       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1024 20:01:29.071023       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1024 20:01:29.086684       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1024 20:01:29.100291       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1024 20:01:41.691935       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1024 20:01:41.910136       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [4fc5bd0d1bff9ea8411f9c1dc73e37a81aa25e9b8287e126bb1fdba304dbf9bf] <==
	* I1024 20:01:42.785625       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.979µs"
	I1024 20:02:13.621694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.121µs"
	I1024 20:02:13.637848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.013µs"
	I1024 20:02:14.407512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.334925ms"
	I1024 20:02:14.407595       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.695µs"
	I1024 20:02:16.082641       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1024 20:02:32.461345       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-773966-m02\" does not exist"
	I1024 20:02:32.480065       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-773966-m02" podCIDRs=["10.244.1.0/24"]
	I1024 20:02:32.485659       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-kcxpk"
	I1024 20:02:32.485685       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cmljn"
	I1024 20:02:36.086453       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-773966-m02"
	I1024 20:02:36.086468       1 event.go:307] "Event occurred" object="multinode-773966-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-773966-m02 event: Registered Node multinode-773966-m02 in Controller"
	I1024 20:03:04.080529       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-773966-m02"
	I1024 20:03:06.768442       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1024 20:03:06.776467       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-wldjb"
	I1024 20:03:06.798715       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-c622k"
	I1024 20:03:06.815568       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="50.581761ms"
	I1024 20:03:06.841553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="25.865726ms"
	I1024 20:03:06.841684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.335µs"
	I1024 20:03:06.844649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="38.096µs"
	I1024 20:03:06.861579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="40.968µs"
	I1024 20:03:10.481351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.146198ms"
	I1024 20:03:10.481418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.385µs"
	I1024 20:03:11.152667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.74883ms"
	I1024 20:03:11.152744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="31.303µs"
	
	* 
	* ==> kube-proxy [0b507753604aafc85e5cac214775f55c0573e21584221e09522b10d9bbb465a4] <==
	* I1024 20:01:42.985097       1 server_others.go:69] "Using iptables proxy"
	I1024 20:01:43.000737       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1024 20:01:43.078120       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1024 20:01:43.084397       1 server_others.go:152] "Using iptables Proxier"
	I1024 20:01:43.084596       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1024 20:01:43.084637       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1024 20:01:43.084690       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1024 20:01:43.084954       1 server.go:846] "Version info" version="v1.28.3"
	I1024 20:01:43.084964       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1024 20:01:43.088893       1 config.go:188] "Starting service config controller"
	I1024 20:01:43.089028       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1024 20:01:43.089097       1 config.go:97] "Starting endpoint slice config controller"
	I1024 20:01:43.089127       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1024 20:01:43.089798       1 config.go:315] "Starting node config controller"
	I1024 20:01:43.089877       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1024 20:01:43.189767       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1024 20:01:43.189772       1 shared_informer.go:318] Caches are synced for service config
	I1024 20:01:43.190125       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1cbe37ffed0b7b90f159ad28660b7798609f6619209733888181153ebfa5373f] <==
	* W1024 20:01:26.153813       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 20:01:26.156112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1024 20:01:26.153918       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 20:01:26.156128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1024 20:01:26.153983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 20:01:26.156143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1024 20:01:26.154045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 20:01:26.156159       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1024 20:01:26.153506       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 20:01:26.156173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1024 20:01:27.001501       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1024 20:01:27.001617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1024 20:01:27.017223       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 20:01:27.017333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1024 20:01:27.046399       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 20:01:27.046509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1024 20:01:27.128272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1024 20:01:27.128425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1024 20:01:27.149164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 20:01:27.149280       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1024 20:01:27.152397       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 20:01:27.152521       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1024 20:01:27.444308       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1024 20:01:27.444344       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1024 20:01:30.427466       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.803113    1384 topology_manager.go:215] "Topology Admit Handler" podUID="99e468ec-c444-4fbf-8a1c-97bd7c654075" podNamespace="kube-system" podName="kube-proxy-jsvnn"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.887414    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/99e468ec-c444-4fbf-8a1c-97bd7c654075-kube-proxy\") pod \"kube-proxy-jsvnn\" (UID: \"99e468ec-c444-4fbf-8a1c-97bd7c654075\") " pod="kube-system/kube-proxy-jsvnn"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.887462    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99e468ec-c444-4fbf-8a1c-97bd7c654075-xtables-lock\") pod \"kube-proxy-jsvnn\" (UID: \"99e468ec-c444-4fbf-8a1c-97bd7c654075\") " pod="kube-system/kube-proxy-jsvnn"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.887485    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99e468ec-c444-4fbf-8a1c-97bd7c654075-lib-modules\") pod \"kube-proxy-jsvnn\" (UID: \"99e468ec-c444-4fbf-8a1c-97bd7c654075\") " pod="kube-system/kube-proxy-jsvnn"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.887516    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk5nc\" (UniqueName: \"kubernetes.io/projected/99e468ec-c444-4fbf-8a1c-97bd7c654075-kube-api-access-qk5nc\") pod \"kube-proxy-jsvnn\" (UID: \"99e468ec-c444-4fbf-8a1c-97bd7c654075\") " pod="kube-system/kube-proxy-jsvnn"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.892189    1384 topology_manager.go:215] "Topology Admit Handler" podUID="8217bced-7146-429e-a09d-edf6e3891335" podNamespace="kube-system" podName="kindnet-drz9j"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.987963    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8217bced-7146-429e-a09d-edf6e3891335-lib-modules\") pod \"kindnet-drz9j\" (UID: \"8217bced-7146-429e-a09d-edf6e3891335\") " pod="kube-system/kindnet-drz9j"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.988022    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8217bced-7146-429e-a09d-edf6e3891335-cni-cfg\") pod \"kindnet-drz9j\" (UID: \"8217bced-7146-429e-a09d-edf6e3891335\") " pod="kube-system/kindnet-drz9j"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.988087    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwn4g\" (UniqueName: \"kubernetes.io/projected/8217bced-7146-429e-a09d-edf6e3891335-kube-api-access-fwn4g\") pod \"kindnet-drz9j\" (UID: \"8217bced-7146-429e-a09d-edf6e3891335\") " pod="kube-system/kindnet-drz9j"
	Oct 24 20:01:41 multinode-773966 kubelet[1384]: I1024 20:01:41.988129    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8217bced-7146-429e-a09d-edf6e3891335-xtables-lock\") pod \"kindnet-drz9j\" (UID: \"8217bced-7146-429e-a09d-edf6e3891335\") " pod="kube-system/kindnet-drz9j"
	Oct 24 20:01:42 multinode-773966 kubelet[1384]: W1024 20:01:42.539891    1384 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30/crio-c5cd5fe86b1b04c62b3b8285fab3f5af8a75cf9debeff909eacc3b26f4b651a3 WatchSource:0}: Error finding container c5cd5fe86b1b04c62b3b8285fab3f5af8a75cf9debeff909eacc3b26f4b651a3: Status 404 returned error can't find the container with id c5cd5fe86b1b04c62b3b8285fab3f5af8a75cf9debeff909eacc3b26f4b651a3
	Oct 24 20:01:43 multinode-773966 kubelet[1384]: I1024 20:01:43.336348    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-drz9j" podStartSLOduration=2.336302832 podCreationTimestamp="2023-10-24 20:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 20:01:43.322684137 +0000 UTC m=+14.281313676" watchObservedRunningTime="2023-10-24 20:01:43.336302832 +0000 UTC m=+14.294932371"
	Oct 24 20:01:49 multinode-773966 kubelet[1384]: I1024 20:01:49.209289    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jsvnn" podStartSLOduration=8.209243037 podCreationTimestamp="2023-10-24 20:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 20:01:43.33765467 +0000 UTC m=+14.296284218" watchObservedRunningTime="2023-10-24 20:01:49.209243037 +0000 UTC m=+20.167872576"
	Oct 24 20:02:13 multinode-773966 kubelet[1384]: I1024 20:02:13.553595    1384 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 24 20:02:13 multinode-773966 kubelet[1384]: I1024 20:02:13.621237    1384 topology_manager.go:215] "Topology Admit Handler" podUID="c3ba8ac1-f91f-4620-a22c-cd8946cd3a43" podNamespace="kube-system" podName="coredns-5dd5756b68-xxljp"
	Oct 24 20:02:13 multinode-773966 kubelet[1384]: I1024 20:02:13.625233    1384 topology_manager.go:215] "Topology Admit Handler" podUID="c67e72f3-94d0-4f1a-9a78-a7e5d344adae" podNamespace="kube-system" podName="storage-provisioner"
	Oct 24 20:02:13 multinode-773966 kubelet[1384]: I1024 20:02:13.724570    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c67e72f3-94d0-4f1a-9a78-a7e5d344adae-tmp\") pod \"storage-provisioner\" (UID: \"c67e72f3-94d0-4f1a-9a78-a7e5d344adae\") " pod="kube-system/storage-provisioner"
	Oct 24 20:02:13 multinode-773966 kubelet[1384]: I1024 20:02:13.724631    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrqr\" (UniqueName: \"kubernetes.io/projected/c3ba8ac1-f91f-4620-a22c-cd8946cd3a43-kube-api-access-4lrqr\") pod \"coredns-5dd5756b68-xxljp\" (UID: \"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43\") " pod="kube-system/coredns-5dd5756b68-xxljp"
	Oct 24 20:02:13 multinode-773966 kubelet[1384]: I1024 20:02:13.724660    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3ba8ac1-f91f-4620-a22c-cd8946cd3a43-config-volume\") pod \"coredns-5dd5756b68-xxljp\" (UID: \"c3ba8ac1-f91f-4620-a22c-cd8946cd3a43\") " pod="kube-system/coredns-5dd5756b68-xxljp"
	Oct 24 20:02:13 multinode-773966 kubelet[1384]: I1024 20:02:13.724682    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv6v7\" (UniqueName: \"kubernetes.io/projected/c67e72f3-94d0-4f1a-9a78-a7e5d344adae-kube-api-access-kv6v7\") pod \"storage-provisioner\" (UID: \"c67e72f3-94d0-4f1a-9a78-a7e5d344adae\") " pod="kube-system/storage-provisioner"
	Oct 24 20:02:13 multinode-773966 kubelet[1384]: W1024 20:02:13.981184    1384 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30/crio-7ddc5e80dd61edbfd8caeaeb4517baeb538aba1ae5a42f12df3ed8397e1ebb70 WatchSource:0}: Error finding container 7ddc5e80dd61edbfd8caeaeb4517baeb538aba1ae5a42f12df3ed8397e1ebb70: Status 404 returned error can't find the container with id 7ddc5e80dd61edbfd8caeaeb4517baeb538aba1ae5a42f12df3ed8397e1ebb70
	Oct 24 20:02:14 multinode-773966 kubelet[1384]: I1024 20:02:14.394200    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.394158633 podCreationTimestamp="2023-10-24 20:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 20:02:14.375390732 +0000 UTC m=+45.334020263" watchObservedRunningTime="2023-10-24 20:02:14.394158633 +0000 UTC m=+45.352788164"
	Oct 24 20:03:06 multinode-773966 kubelet[1384]: I1024 20:03:06.822368    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xxljp" podStartSLOduration=84.822329683 podCreationTimestamp="2023-10-24 20:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-24 20:02:14.394751196 +0000 UTC m=+45.353380752" watchObservedRunningTime="2023-10-24 20:03:06.822329683 +0000 UTC m=+97.780959222"
	Oct 24 20:03:06 multinode-773966 kubelet[1384]: I1024 20:03:06.822504    1384 topology_manager.go:215] "Topology Admit Handler" podUID="bd8f56c1-c8c3-48f0-b541-38d8ebf577e0" podNamespace="default" podName="busybox-5bc68d56bd-c622k"
	Oct 24 20:03:06 multinode-773966 kubelet[1384]: I1024 20:03:06.852523    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxj7h\" (UniqueName: \"kubernetes.io/projected/bd8f56c1-c8c3-48f0-b541-38d8ebf577e0-kube-api-access-vxj7h\") pod \"busybox-5bc68d56bd-c622k\" (UID: \"bd8f56c1-c8c3-48f0-b541-38d8ebf577e0\") " pod="default/busybox-5bc68d56bd-c622k"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-773966 -n multinode-773966
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-773966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.50s)

TestRunningBinaryUpgrade (77.38s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3960051652.exe start -p running-upgrade-038195 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1024 20:18:00.663516 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3960051652.exe start -p running-upgrade-038195 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m9.142830062s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-038195 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-038195 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.758049036s)

-- stdout --
	* [running-upgrade-038195] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-038195 in cluster running-upgrade-038195
	* Pulling base image ...
	* Updating the running docker "running-upgrade-038195" container ...
	
	

-- /stdout --
** stderr ** 
	I1024 20:18:53.729134 1241594 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:18:53.729382 1241594 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:18:53.729410 1241594 out.go:309] Setting ErrFile to fd 2...
	I1024 20:18:53.729432 1241594 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:18:53.729773 1241594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 20:18:53.730328 1241594 out.go:303] Setting JSON to false
	I1024 20:18:53.731527 1241594 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36083,"bootTime":1698142651,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 20:18:53.731630 1241594 start.go:138] virtualization:  
	I1024 20:18:53.734574 1241594 out.go:177] * [running-upgrade-038195] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 20:18:53.736595 1241594 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:18:53.736692 1241594 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1024 20:18:53.736737 1241594 notify.go:220] Checking for updates...
	I1024 20:18:53.749147 1241594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:18:53.751091 1241594 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:18:53.753308 1241594 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 20:18:53.755357 1241594 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 20:18:53.757413 1241594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:18:53.760826 1241594 config.go:182] Loaded profile config "running-upgrade-038195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1024 20:18:53.766844 1241594 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:18:53.769041 1241594 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:18:53.818064 1241594 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 20:18:53.818173 1241594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:18:53.925163 1241594 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1024 20:18:53.938255 1241594 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-24 20:18:53.92675109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:18:53.938360 1241594 docker.go:295] overlay module found
	I1024 20:18:53.941647 1241594 out.go:177] * Using the docker driver based on existing profile
	I1024 20:18:53.944923 1241594 start.go:298] selected driver: docker
	I1024 20:18:53.944943 1241594 start.go:902] validating driver "docker" against &{Name:running-upgrade-038195 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-038195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.52 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 20:18:53.945040 1241594 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:18:53.945663 1241594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:18:54.020240 1241594 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-10-24 20:18:54.00840345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:18:54.020607 1241594 cni.go:84] Creating CNI manager for ""
	I1024 20:18:54.020623 1241594 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 20:18:54.020634 1241594 start_flags.go:323] config:
	{Name:running-upgrade-038195 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-038195 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.52 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 20:18:54.023006 1241594 out.go:177] * Starting control plane node running-upgrade-038195 in cluster running-upgrade-038195
	I1024 20:18:54.025266 1241594 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 20:18:54.027096 1241594 out.go:177] * Pulling base image ...
	I1024 20:18:54.028974 1241594 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1024 20:18:54.029066 1241594 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1024 20:18:54.052045 1241594 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1024 20:18:54.052068 1241594 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1024 20:18:54.103542 1241594 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1024 20:18:54.103714 1241594 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/running-upgrade-038195/config.json ...
	I1024 20:18:54.103975 1241594 cache.go:195] Successfully downloaded all kic artifacts
	I1024 20:18:54.104027 1241594 start.go:365] acquiring machines lock for running-upgrade-038195: {Name:mkc5556556e4afe5d0722ac71d8793057fbf6f2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.104090 1241594 start.go:369] acquired machines lock for "running-upgrade-038195" in 34.018µs
	I1024 20:18:54.104108 1241594 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:18:54.104117 1241594 fix.go:54] fixHost starting: 
	I1024 20:18:54.104389 1241594 cli_runner.go:164] Run: docker container inspect running-upgrade-038195 --format={{.State.Status}}
	I1024 20:18:54.104688 1241594 cache.go:107] acquiring lock: {Name:mk86a42edbe5cf42a5e9b9e663fc0ed5c1abe176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.104766 1241594 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 20:18:54.104779 1241594 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 95.885µs
	I1024 20:18:54.104789 1241594 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 20:18:54.104799 1241594 cache.go:107] acquiring lock: {Name:mkcd9edfd9d42acc1960f5575a65ab2fd17d3349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.104835 1241594 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1024 20:18:54.104845 1241594 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 47.597µs
	I1024 20:18:54.104852 1241594 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1024 20:18:54.104866 1241594 cache.go:107] acquiring lock: {Name:mk386e305fc452ab37fa9598d9ed1735e45ad989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.104893 1241594 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1024 20:18:54.104899 1241594 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 33.698µs
	I1024 20:18:54.104909 1241594 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1024 20:18:54.104919 1241594 cache.go:107] acquiring lock: {Name:mkf8d87607e47b1752dab854516d4a5cc43c6fbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.104949 1241594 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1024 20:18:54.104960 1241594 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 41.773µs
	I1024 20:18:54.104967 1241594 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1024 20:18:54.104988 1241594 cache.go:107] acquiring lock: {Name:mkffc4ba0e858609be826598a474f4728612c093 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.105022 1241594 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1024 20:18:54.105032 1241594 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 50.707µs
	I1024 20:18:54.105039 1241594 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1024 20:18:54.105048 1241594 cache.go:107] acquiring lock: {Name:mk1511b28308048d2c648338b4bda0ebac4238b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.105081 1241594 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1024 20:18:54.105091 1241594 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 42.699µs
	I1024 20:18:54.105100 1241594 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1024 20:18:54.105113 1241594 cache.go:107] acquiring lock: {Name:mk253abc197bf8cc0e29889985b2ac4be67efcb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.105146 1241594 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1024 20:18:54.105154 1241594 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 42.314µs
	I1024 20:18:54.105161 1241594 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1024 20:18:54.105174 1241594 cache.go:107] acquiring lock: {Name:mk12749433743920689878e1e69f0f68bf86cdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:18:54.105201 1241594 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1024 20:18:54.105209 1241594 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 36.652µs
	I1024 20:18:54.105215 1241594 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1024 20:18:54.105222 1241594 cache.go:87] Successfully saved all images to host disk.
	I1024 20:18:54.124300 1241594 fix.go:102] recreateIfNeeded on running-upgrade-038195: state=Running err=<nil>
	W1024 20:18:54.124345 1241594 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:18:54.126789 1241594 out.go:177] * Updating the running docker "running-upgrade-038195" container ...
	I1024 20:18:54.128610 1241594 machine.go:88] provisioning docker machine ...
	I1024 20:18:54.128637 1241594 ubuntu.go:169] provisioning hostname "running-upgrade-038195"
	I1024 20:18:54.128707 1241594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-038195
	I1024 20:18:54.150783 1241594 main.go:141] libmachine: Using SSH client type: native
	I1024 20:18:54.151229 1241594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34396 <nil> <nil>}
	I1024 20:18:54.151249 1241594 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-038195 && echo "running-upgrade-038195" | sudo tee /etc/hostname
	I1024 20:18:54.306351 1241594 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-038195
	
	I1024 20:18:54.306429 1241594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-038195
	I1024 20:18:54.329268 1241594 main.go:141] libmachine: Using SSH client type: native
	I1024 20:18:54.329703 1241594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34396 <nil> <nil>}
	I1024 20:18:54.329720 1241594 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-038195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-038195/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-038195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:18:54.470876 1241594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:18:54.470898 1241594 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 20:18:54.470927 1241594 ubuntu.go:177] setting up certificates
	I1024 20:18:54.470937 1241594 provision.go:83] configureAuth start
	I1024 20:18:54.470998 1241594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-038195
	I1024 20:18:54.490447 1241594 provision.go:138] copyHostCerts
	I1024 20:18:54.490511 1241594 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 20:18:54.490524 1241594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 20:18:54.490597 1241594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 20:18:54.490735 1241594 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 20:18:54.490745 1241594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 20:18:54.490772 1241594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 20:18:54.490877 1241594 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 20:18:54.490888 1241594 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 20:18:54.490914 1241594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 20:18:54.490962 1241594 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-038195 san=[192.168.70.52 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-038195]
	I1024 20:18:55.059236 1241594 provision.go:172] copyRemoteCerts
	I1024 20:18:55.059313 1241594 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:18:55.059359 1241594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-038195
	I1024 20:18:55.080773 1241594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34396 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/running-upgrade-038195/id_rsa Username:docker}
	I1024 20:18:55.185102 1241594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 20:18:55.212770 1241594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:18:55.236934 1241594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:18:55.262553 1241594 provision.go:86] duration metric: configureAuth took 791.60205ms
	I1024 20:18:55.262583 1241594 ubuntu.go:193] setting minikube options for container-runtime
	I1024 20:18:55.262779 1241594 config.go:182] Loaded profile config "running-upgrade-038195": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1024 20:18:55.262890 1241594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-038195
	I1024 20:18:55.282427 1241594 main.go:141] libmachine: Using SSH client type: native
	I1024 20:18:55.282924 1241594 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34396 <nil> <nil>}
	I1024 20:18:55.282945 1241594 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:18:55.909874 1241594 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:18:55.909901 1241594 machine.go:91] provisioned docker machine in 1.781275555s
	I1024 20:18:55.909913 1241594 start.go:300] post-start starting for "running-upgrade-038195" (driver="docker")
	I1024 20:18:55.909924 1241594 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:18:55.910001 1241594 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:18:55.910052 1241594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-038195
	I1024 20:18:55.938072 1241594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34396 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/running-upgrade-038195/id_rsa Username:docker}
	I1024 20:18:56.039193 1241594 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:18:56.043313 1241594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 20:18:56.043342 1241594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 20:18:56.043354 1241594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 20:18:56.043362 1241594 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1024 20:18:56.043373 1241594 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 20:18:56.043450 1241594 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 20:18:56.043542 1241594 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 20:18:56.043686 1241594 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:18:56.054150 1241594 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 20:18:56.078597 1241594 start.go:303] post-start completed in 168.667345ms
	I1024 20:18:56.078696 1241594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 20:18:56.078740 1241594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-038195
	I1024 20:18:56.108730 1241594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34396 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/running-upgrade-038195/id_rsa Username:docker}
	I1024 20:18:56.209872 1241594 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 20:18:56.215983 1241594 fix.go:56] fixHost completed within 2.111857184s
	I1024 20:18:56.216011 1241594 start.go:83] releasing machines lock for "running-upgrade-038195", held for 2.111907841s
	I1024 20:18:56.216086 1241594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-038195
	I1024 20:18:56.238756 1241594 ssh_runner.go:195] Run: cat /version.json
	I1024 20:18:56.238789 1241594 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:18:56.238809 1241594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-038195
	I1024 20:18:56.238846 1241594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-038195
	I1024 20:18:56.260998 1241594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34396 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/running-upgrade-038195/id_rsa Username:docker}
	I1024 20:18:56.262420 1241594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34396 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/running-upgrade-038195/id_rsa Username:docker}
	W1024 20:18:56.466749 1241594 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1024 20:18:56.466840 1241594 ssh_runner.go:195] Run: systemctl --version
	I1024 20:18:56.472374 1241594 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:18:56.631084 1241594 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 20:18:56.637146 1241594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:18:56.661126 1241594 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 20:18:56.661224 1241594 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:18:56.691564 1241594 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:18:56.691589 1241594 start.go:472] detecting cgroup driver to use...
	I1024 20:18:56.691620 1241594 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 20:18:56.691669 1241594 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:18:56.721367 1241594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:18:56.733583 1241594 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:18:56.733657 1241594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:18:56.748911 1241594 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:18:56.762403 1241594 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1024 20:18:56.781355 1241594 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1024 20:18:56.781433 1241594 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:18:56.978191 1241594 docker.go:214] disabling docker service ...
	I1024 20:18:56.978276 1241594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:18:57.001332 1241594 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:18:57.018412 1241594 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:18:57.183918 1241594 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:18:57.334029 1241594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:18:57.346986 1241594 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:18:57.367178 1241594 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 20:18:57.367254 1241594 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:18:57.381448 1241594 out.go:177] 
	W1024 20:18:57.383468 1241594 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1024 20:18:57.383498 1241594 out.go:239] * 
	W1024 20:18:57.384512 1241594 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 20:18:57.387493 1241594 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-038195 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
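The exit status 90 traces to the pause_image step logged at 20:18:57.367254 above: the upgraded binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the kicbase v0.0.17 image provisioned by minikube v1.17.0 predates that drop-in layout, so sed exits 2 ("can't read ... No such file or directory"). A minimal defensive sketch, assuming the legacy single-file layout lives at /etc/crio/crio.conf (a hypothetical guard, not minikube's actual fix):

	# Hypothetical guard: prefer the drop-in config, fall back to the legacy
	# single-file cri-o config, and skip the rewrite if neither exists.
	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf
	if [ -f "$conf" ]; then
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
	else
		echo "no cri-o config found, leaving pause_image unchanged" >&2
	fi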
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-24 20:18:57.413846069 +0000 UTC m=+3328.456883365
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-038195
helpers_test.go:235: (dbg) docker inspect running-upgrade-038195:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e138b57fe0853bcdfb027b300677ebf56f6fe8c1fdc8fd7f32b88507eb737a0c",
	        "Created": "2023-10-24T20:18:04.870116709Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1237856,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T20:18:05.322206211Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/e138b57fe0853bcdfb027b300677ebf56f6fe8c1fdc8fd7f32b88507eb737a0c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e138b57fe0853bcdfb027b300677ebf56f6fe8c1fdc8fd7f32b88507eb737a0c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e138b57fe0853bcdfb027b300677ebf56f6fe8c1fdc8fd7f32b88507eb737a0c/hosts",
	        "LogPath": "/var/lib/docker/containers/e138b57fe0853bcdfb027b300677ebf56f6fe8c1fdc8fd7f32b88507eb737a0c/e138b57fe0853bcdfb027b300677ebf56f6fe8c1fdc8fd7f32b88507eb737a0c-json.log",
	        "Name": "/running-upgrade-038195",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-038195:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-038195",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ea35f00554b3e2c650d0a37ef4ed659c16396b246637991b1d001e33a0673aa0-init/diff:/var/lib/docker/overlay2/93bf2a7ad283ff15be13e9e604464b25fd0e7c43b0703a84fefcf9e196b10bd1/diff:/var/lib/docker/overlay2/739a774bb24137a7dd31d311f688d51e8d004c000d5fc7fe9245d1d176a62a93/diff:/var/lib/docker/overlay2/daaa352b0140656a2bec626b24aa01074381ed532c21b72f9551708a341f6c08/diff:/var/lib/docker/overlay2/614cc3e4bddadcb04011a771e56399d0caf28a84b1e93253f9b5e47e25c522c9/diff:/var/lib/docker/overlay2/5ad267b0135e016d5e03e2f45bc104985992a7c487b85df302ef4e87d5400f59/diff:/var/lib/docker/overlay2/84b20d595909357a75fc0200aa85543934f9d47cd767b6e424c72ffdb534bcee/diff:/var/lib/docker/overlay2/e6115e72478eb4b3986d7c63f7ab8bbdb6cd558dad4c782e7093389a49b2de92/diff:/var/lib/docker/overlay2/938fe23b49c1aa122a2160539bd3b267da672368025f41379b47b5b0004e6252/diff:/var/lib/docker/overlay2/bab885e59c09499c5cbfad66e306c6a8efd88cdf9b62ed0a3980c4cd66058ebe/diff:/var/lib/docker/overlay2/ebad1f
6c3e51bdc3b033a48058058f65fc61e5956d152eee3149712d00f824a7/diff:/var/lib/docker/overlay2/ab03ef2aa94a937ce43dddc631a1e519c0b224fd4ef64485de0ca1bca7edc934/diff:/var/lib/docker/overlay2/ad9f5612e6c5838d308f3acd7864812438b5ae6edf57c7d9eabb4f9e0df0e880/diff:/var/lib/docker/overlay2/1bc765bac54c68c1af8f6692ea22175af9f08d684f0271a0d71060c54deee584/diff:/var/lib/docker/overlay2/b293753d0e89f344a2757613132317e2d8448e2cf25dec4f3d042c8cd762cfac/diff:/var/lib/docker/overlay2/2d13abe961fa1d64eb35cf0c2712a0a9f1e33e6774d9850a847cc3e3677d7f70/diff:/var/lib/docker/overlay2/246e667e1f041e11feabafedb0d06cd37eccf60090da913eb36cb906841e884a/diff:/var/lib/docker/overlay2/6581c6ad753fc6d99da6a1c6cd1be623dfd7bd032596fad52f0996de541e65bc/diff:/var/lib/docker/overlay2/a8045dd1b98f7cbb066c595ad136ad77b85a03ed724425268b4538accf3841ae/diff:/var/lib/docker/overlay2/6c0779eb0c6dbfd00d643c005bf7300b5da5fd0490fd272a939c2b45a1ae4df7/diff:/var/lib/docker/overlay2/4292c9217394506afe3a59897a0b4f8cd8ccbe67da2933b40cd37bf220481968/diff:/var/lib/d
ocker/overlay2/bd1ae79457c06840815c93c3701d084fc5d0573c370d4bc0e812d2b131545a8a/diff:/var/lib/docker/overlay2/91b6e6ab2471ab02dde1e15134653f68270fc3f827f3e4f71cebbae694656ee0/diff:/var/lib/docker/overlay2/09061f5949a34fbc19f50900b7ae98e354569e2693f79e6fc0d4ee7b3424cd49/diff:/var/lib/docker/overlay2/9563acb369535e3779b6b7a9f4d2e7e333ea5afcb727607bdd48533280f3a1e1/diff:/var/lib/docker/overlay2/34b11453ca1981e7181399c38bfe2960f93973270a3640c985de4cd4830689db/diff:/var/lib/docker/overlay2/a51d148a577af163bdb09f42f55377d8ad1105a87f48831825c8fcaaa1ca05c9/diff:/var/lib/docker/overlay2/c1e128ddd88828b282eacfe5b3d836193b7cb492afa0a3dfdeff344aabc53916/diff:/var/lib/docker/overlay2/3c4c35bf290559115d0b050cea3f0509d045de2606d1436f71c691780ffad0ad/diff:/var/lib/docker/overlay2/64a9a5ec352532c48112a1cdf602b3f80aaf7d05cde42575899a1c6a77c84034/diff:/var/lib/docker/overlay2/82da7338a019ba033b46b2cd4d336a96197bed1466b53ac3eddfe7a0284f2e46/diff:/var/lib/docker/overlay2/fe7f398c43faa3c63f355b373fc55c0ac47998a981195e22632bed6aed1
b286c/diff:/var/lib/docker/overlay2/8c80684e73ebadc515540be478b6bdb9362ca08754c7ba9b0c826ecef5129745/diff:/var/lib/docker/overlay2/a02e0c403e89263cd0ecc8b97cc6d0edfc81c4e450f0446e49fd57c21067419a/diff:/var/lib/docker/overlay2/d34527649c0d44d51314af0b5a9d23b03b92ccccedc66224ab5fdaf4e2085924/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ea35f00554b3e2c650d0a37ef4ed659c16396b246637991b1d001e33a0673aa0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ea35f00554b3e2c650d0a37ef4ed659c16396b246637991b1d001e33a0673aa0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ea35f00554b3e2c650d0a37ef4ed659c16396b246637991b1d001e33a0673aa0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-038195",
	                "Source": "/var/lib/docker/volumes/running-upgrade-038195/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-038195",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-038195",
	                "name.minikube.sigs.k8s.io": "running-upgrade-038195",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8098ee97d0674854318b413d9124bf358653b2a4f4cb0321f4c8701e442e2801",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34396"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34395"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34394"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34393"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8098ee97d067",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-038195": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.52"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e138b57fe085",
	                        "running-upgrade-038195"
	                    ],
	                    "NetworkID": "49521fe0829bf257f7859e974a5da3692ae16d3032b538eaccc06433b35a2b25",
	                    "EndpointID": "0042b33d8ad7c68e27d23894127df587c796e1738066580344219a60b40cf491",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.52",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:34",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
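The inspect dump above is where the harness resolves the container's mapped host ports (22/tcp -> 34396) and network address; the cli_runner lines in the stderr log use the same Go templates. For reference, the equivalent manual queries, with the templates copied verbatim from the log:

	# Read the SSH host port and the container IP for this profile's container.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' running-upgrade-038195
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' running-upgrade-038195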
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-038195 -n running-upgrade-038195
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-038195 -n running-upgrade-038195: exit status 4 (538.055775ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:18:57.847362 1242174 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-038195" does not appear in /home/jenkins/minikube-integration/17485-1112248/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-038195" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
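The exit status 4 reflects a stale kubectl context rather than a stopped host: the container is still running, but the profile's endpoint is missing from the kubeconfig (status.go:415 above). Following the hint printed in the stdout block, the context could be repaired by hand before retrying log retrieval:

	# Re-point kubectl at the still-running profile, as the warning suggests.
	out/minikube-linux-arm64 update-context -p running-upgrade-038195
	kubectl config current-context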
helpers_test.go:175: Cleaning up "running-upgrade-038195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-038195
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-038195: (2.805992206s)
--- FAIL: TestRunningBinaryUpgrade (77.38s)
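To reproduce this failure outside the harness, version_upgrade_test.go:145 amounts to starting a cluster with a v1.17.0 binary and then re-running start in place with the binary under test. The sketch below assumes the v1.17.0 binary was downloaded to /tmp with a run-specific suffix, as in TestMissingContainerUpgrade below; <suffix> is a placeholder:

	# Step 1: create the cluster with the legacy binary (path suffix varies per run).
	/tmp/minikube-v1.17.0.<suffix>.exe start -p running-upgrade-038195 --memory=2200 --driver=docker --container-runtime=crio
	# Step 2: upgrade in place with the binary under test (command verbatim from the failure above).
	out/minikube-linux-arm64 start -p running-upgrade-038195 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio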

                                                
                                    
TestMissingContainerUpgrade (178.91s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.1467697763.exe start -p missing-upgrade-183191 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.1467697763.exe start -p missing-upgrade-183191 --memory=2200 --driver=docker  --container-runtime=crio: (2m8.59583416s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-183191
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-183191: (10.373227872s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-183191
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-183191 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-183191 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (36.342872697s)

                                                
                                                
-- stdout --
	* [missing-upgrade-183191] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-183191 in cluster missing-upgrade-183191
	* Pulling base image ...
	* docker "missing-upgrade-183191" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
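The "container is missing, will recreate" path in the stdout above is the intended scenario: the harness stopped and removed the v1.17.0 container before re-running start (version_upgrade_test.go:331 and :336). The same state can be staged manually with the commands taken from the test:

	# Remove the container out from under the profile, then restart with the new binary.
	docker stop missing-upgrade-183191
	docker rm missing-upgrade-183191
	out/minikube-linux-arm64 start -p missing-upgrade-183191 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio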
** stderr ** 
	I1024 20:15:33.330014 1228184 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:15:33.330991 1228184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:15:33.331035 1228184 out.go:309] Setting ErrFile to fd 2...
	I1024 20:15:33.331055 1228184 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:15:33.331368 1228184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 20:15:33.331787 1228184 out.go:303] Setting JSON to false
	I1024 20:15:33.334621 1228184 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":35883,"bootTime":1698142651,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 20:15:33.334735 1228184 start.go:138] virtualization:  
	I1024 20:15:33.337452 1228184 out.go:177] * [missing-upgrade-183191] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 20:15:33.339546 1228184 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:15:33.339630 1228184 notify.go:220] Checking for updates...
	I1024 20:15:33.341684 1228184 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:15:33.345837 1228184 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:15:33.347686 1228184 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 20:15:33.349860 1228184 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 20:15:33.351718 1228184 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:15:33.354326 1228184 config.go:182] Loaded profile config "missing-upgrade-183191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1024 20:15:33.356992 1228184 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:15:33.358752 1228184 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:15:33.419702 1228184 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 20:15:33.419839 1228184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:15:33.555610 1228184 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-24 20:15:33.544477165 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:15:33.555711 1228184 docker.go:295] overlay module found
	I1024 20:15:33.558106 1228184 out.go:177] * Using the docker driver based on existing profile
	I1024 20:15:33.560243 1228184 start.go:298] selected driver: docker
	I1024 20:15:33.560281 1228184 start.go:902] validating driver "docker" against &{Name:missing-upgrade-183191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-183191 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.152 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 20:15:33.560370 1228184 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:15:33.561003 1228184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:15:33.657175 1228184 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-24 20:15:33.644748858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:15:33.657467 1228184 cni.go:84] Creating CNI manager for ""
	I1024 20:15:33.657478 1228184 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 20:15:33.657490 1228184 start_flags.go:323] config:
	{Name:missing-upgrade-183191 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-183191 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.152 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
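The cni.go lines above encode a simple driver/runtime decision: the KIC-style docker driver paired with a non-Docker runtime (crio here) needs an explicit CNI, and kindnet is the one minikube recommends. A minimal Go sketch of that decision, using illustrative names rather than minikube's actual functions:

    package main

    import "fmt"

    // recommendCNI is a hypothetical stand-in for the cni.go logic above:
    // a KIC driver (docker/podman) plus a non-Docker runtime needs a real
    // CNI, and kindnet is the one the log recommends in that case.
    func recommendCNI(driver, runtime string) string {
        if (driver == "docker" || driver == "podman") && runtime != "docker" {
            return "kindnet"
        }
        return "auto" // otherwise defer to the default CNI selection
    }

    func main() {
        fmt.Println(recommendCNI("docker", "crio")) // prints: kindnet
    }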
	I1024 20:15:33.660645 1228184 out.go:177] * Starting control plane node missing-upgrade-183191 in cluster missing-upgrade-183191
	I1024 20:15:33.663326 1228184 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 20:15:33.665143 1228184 out.go:177] * Pulling base image ...
	I1024 20:15:33.667247 1228184 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1024 20:15:33.667423 1228184 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1024 20:15:33.718020 1228184 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1024 20:15:33.718203 1228184 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1024 20:15:33.718721 1228184 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1024 20:15:33.782877 1228184 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
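The 404 above means no preloaded image tarball exists for the v1.20.2/cri-o/arm64 combination, so minikube falls back to caching each image individually (the cache.go lines that follow). A hedged sketch of such a probe, assuming only the URL scheme visible in the log line:

    package main

    import (
        "fmt"
        "net/http"
    )

    // preloadExists probes the bucket the way preload.go's log line suggests;
    // a non-200 status (404 here) tells the caller to fall back to caching
    // images one by one. The function name is illustrative.
    func preloadExists(k8sVersion, runtime, arch string) (bool, error) {
        url := fmt.Sprintf("https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/%s/preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4",
            k8sVersion, k8sVersion, runtime, arch)
        resp, err := http.Head(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := preloadExists("v1.20.2", "cri-o", "arm64")
        fmt.Println(ok, err) // expected here: false <nil>, matching the 404
    }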
	I1024 20:15:33.783030 1228184 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/missing-upgrade-183191/config.json ...
	I1024 20:15:33.783422 1228184 cache.go:107] acquiring lock: {Name:mk86a42edbe5cf42a5e9b9e663fc0ed5c1abe176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:33.783496 1228184 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 20:15:33.783505 1228184 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.33µs
	I1024 20:15:33.783526 1228184 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 20:15:33.783539 1228184 cache.go:107] acquiring lock: {Name:mkcd9edfd9d42acc1960f5575a65ab2fd17d3349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:33.783615 1228184 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1024 20:15:33.783771 1228184 cache.go:107] acquiring lock: {Name:mk386e305fc452ab37fa9598d9ed1735e45ad989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:33.783869 1228184 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1024 20:15:33.783955 1228184 cache.go:107] acquiring lock: {Name:mkf8d87607e47b1752dab854516d4a5cc43c6fbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:33.784019 1228184 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1024 20:15:33.784085 1228184 cache.go:107] acquiring lock: {Name:mkffc4ba0e858609be826598a474f4728612c093 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:33.784143 1228184 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1024 20:15:33.784204 1228184 cache.go:107] acquiring lock: {Name:mk1511b28308048d2c648338b4bda0ebac4238b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:33.784256 1228184 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1024 20:15:33.784323 1228184 cache.go:107] acquiring lock: {Name:mk253abc197bf8cc0e29889985b2ac4be67efcb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:33.784374 1228184 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1024 20:15:33.784447 1228184 cache.go:107] acquiring lock: {Name:mk12749433743920689878e1e69f0f68bf86cdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:33.784498 1228184 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1024 20:15:33.789259 1228184 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1024 20:15:33.789869 1228184 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1024 20:15:33.790826 1228184 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1024 20:15:33.790984 1228184 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1024 20:15:33.791835 1228184 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1024 20:15:33.792038 1228184 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1024 20:15:33.792571 1228184 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	W1024 20:15:34.128969 1228184 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1024 20:15:34.129024 1228184 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1024 20:15:34.146725 1228184 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I1024 20:15:34.153060 1228184 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1024 20:15:34.156542 1228184 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1024 20:15:34.162993 1228184 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1024 20:15:34.163063 1228184 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1024 20:15:34.193572 1228184 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W1024 20:15:34.194879 1228184 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1024 20:15:34.194943 1228184 cache.go:162] opening:  /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I1024 20:15:34.288397 1228184 cache.go:157] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1024 20:15:34.288465 1228184 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 504.258818ms
	I1024 20:15:34.288494 1228184 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  16.42 MiB / 287.99 MiB [>] 5.70% ? p/s ?
	I1024 20:15:34.733512 1228184 cache.go:157] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1024 20:15:34.733544 1228184 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 949.09706ms
	I1024 20:15:34.736889 1228184 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.21 MiB
	I1024 20:15:34.997887 1228184 cache.go:157] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1024 20:15:34.997909 1228184 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.213954261s
	I1024 20:15:34.997922 1228184 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  32.52 MiB / 287.99 MiB  11.29% 41.12 MiB
	I1024 20:15:35.547208 1228184 cache.go:157] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1024 20:15:35.549851 1228184 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.766302022s
	I1024 20:15:35.549881 1228184 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1024 20:15:35.608328 1228184 cache.go:157] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1024 20:15:35.608357 1228184 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.824586332s
	I1024 20:15:35.608376 1228184 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  60.17 MiB / 287.99 MiB  20.89% 41.12 MiB
	I1024 20:15:36.073315 1228184 cache.go:157] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1024 20:15:36.073351 1228184 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.289261704s
	I1024 20:15:36.073371 1228184 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  171.72 MiB / 287.99 MiB  59.63% 47.77 MiB
	I1024 20:15:37.458239 1228184 cache.go:157] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1024 20:15:37.458574 1228184 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 3.674251608s
	I1024 20:15:37.458619 1228184 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1024 20:15:37.458656 1228184 cache.go:87] Successfully saved all images to host disk.
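Each image above was fetched under its own per-image lock (cache.go:107) and written to a tarball concurrently; the interleaved "exists"/"succeeded" lines are the goroutines finishing at different times. A simplified sketch of that fan-out, with a placeholder body standing in for minikube's real pull-and-save step:

    package main

    import (
        "fmt"
        "sync"
    )

    // saveToTar stands in for the real pull-and-write-tarball logic.
    func saveToTar(img string) {
        fmt.Println("save to tar file", img, "succeeded")
    }

    func main() {
        images := []string{
            "registry.k8s.io/kube-apiserver:v1.20.2",
            "registry.k8s.io/etcd:3.4.13-0",
            "registry.k8s.io/coredns:1.7.0",
        }
        var wg sync.WaitGroup
        for _, img := range images {
            img := img // capture for the goroutine
            wg.Add(1)
            go func() { defer wg.Done(); saveToTar(img) }()
        }
        wg.Wait()
        fmt.Println("Successfully saved all images to host disk.")
    }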
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 48.14 MiB
	I1024 20:15:40.277080 1228184 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1024 20:15:40.277092 1228184 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1024 20:15:41.302002 1228184 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1024 20:15:41.302033 1228184 cache.go:195] Successfully downloaded all kic artifacts
	I1024 20:15:41.302084 1228184 start.go:365] acquiring machines lock for missing-upgrade-183191: {Name:mk66ecea6e5009ce9c9822bddb357bc94e1909ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:15:41.302246 1228184 start.go:369] acquired machines lock for "missing-upgrade-183191" in 114.576µs
	I1024 20:15:41.302278 1228184 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:15:41.302287 1228184 fix.go:54] fixHost starting: 
	I1024 20:15:41.302728 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:41.330310 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:41.330373 1228184 fix.go:102] recreateIfNeeded on missing-upgrade-183191: state= err=unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:41.330391 1228184 fix.go:107] machineExists: false. err=machine does not exist
	I1024 20:15:41.333340 1228184 out.go:177] * docker "missing-upgrade-183191" container is missing, will recreate.
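The fix.go/machineExists exchange above boils down to: inspect the container, and treat Docker's "No such container" as "machine gone, recreate" rather than as a fatal error. A rough sketch of that check, assuming only the docker CLI on the host:

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // machineExists mirrors the check logged above: a failed inspect whose
    // stderr names a missing container means the machine should be recreated.
    func machineExists(name string) (bool, error) {
        cmd := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}")
        var stderr bytes.Buffer
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            if strings.Contains(stderr.String(), "No such container") {
                return false, nil // gone: recreate instead of failing
            }
            return false, err
        }
        return true, nil
    }

    func main() {
        exists, err := machineExists("missing-upgrade-183191")
        fmt.Println(exists, err) // on this host: false <nil>
    }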
	I1024 20:15:41.335645 1228184 delete.go:124] DEMOLISHING missing-upgrade-183191 ...
	I1024 20:15:41.336245 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:41.361677 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	W1024 20:15:41.361822 1228184 stop.go:75] unable to get state: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:41.361880 1228184 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:41.362478 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:41.383442 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:41.383503 1228184 delete.go:82] Unable to get host status for missing-upgrade-183191, assuming it has already been deleted: state: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:41.383564 1228184 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-183191
	W1024 20:15:41.405948 1228184 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-183191 returned with exit code 1
	I1024 20:15:41.405981 1228184 kic.go:368] could not find the container missing-upgrade-183191 to remove it. will try anyways
	I1024 20:15:41.406106 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:41.432974 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	W1024 20:15:41.433031 1228184 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:41.433161 1228184 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-183191 /bin/bash -c "sudo init 0"
	W1024 20:15:41.467223 1228184 cli_runner.go:211] docker exec --privileged -t missing-upgrade-183191 /bin/bash -c "sudo init 0" returned with exit code 1
	I1024 20:15:41.467269 1228184 oci.go:650] error shutdown missing-upgrade-183191: docker exec --privileged -t missing-upgrade-183191 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:42.467423 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:42.512273 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:42.512373 1228184 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:42.512394 1228184 oci.go:664] temporary error: container missing-upgrade-183191 status is  but expect it to be exited
	I1024 20:15:42.512428 1228184 retry.go:31] will retry after 300.392154ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:42.813994 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:42.870436 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:42.870500 1228184 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:42.870525 1228184 oci.go:664] temporary error: container missing-upgrade-183191 status is  but expect it to be exited
	I1024 20:15:42.870549 1228184 retry.go:31] will retry after 1.027362446s: couldn't verify container is exited. %v: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:43.898163 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:43.927999 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:43.928058 1228184 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:43.928069 1228184 oci.go:664] temporary error: container missing-upgrade-183191 status is  but expect it to be exited
	I1024 20:15:43.928093 1228184 retry.go:31] will retry after 846.785481ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:44.775087 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:44.792797 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:44.792854 1228184 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:44.792865 1228184 oci.go:664] temporary error: container missing-upgrade-183191 status is  but expect it to be exited
	I1024 20:15:44.792889 1228184 retry.go:31] will retry after 2.18332773s: couldn't verify container is exited. %v: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:46.977867 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:46.997604 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:46.997657 1228184 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:46.997666 1228184 oci.go:664] temporary error: container missing-upgrade-183191 status is  but expect it to be exited
	I1024 20:15:46.997695 1228184 retry.go:31] will retry after 1.874249753s: couldn't verify container is exited. %v: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:48.872149 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:48.890252 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:48.890310 1228184 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:48.890327 1228184 oci.go:664] temporary error: container missing-upgrade-183191 status is  but expect it to be exited
	I1024 20:15:48.890360 1228184 retry.go:31] will retry after 4.059755577s: couldn't verify container is exited. %v: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:52.950939 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:15:52.969785 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:15:52.969846 1228184 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:15:52.969857 1228184 oci.go:664] temporary error: container missing-upgrade-183191 status is  but expect it to be exited
	I1024 20:15:52.969881 1228184 retry.go:31] will retry after 7.438890765s: couldn't verify container is exited. %v: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:16:00.410781 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:16:00.432248 1228184 cli_runner.go:211] docker container inspect missing-upgrade-183191 --format={{.State.Status}} returned with exit code 1
	I1024 20:16:00.432312 1228184 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	I1024 20:16:00.432324 1228184 oci.go:664] temporary error: container missing-upgrade-183191 status is  but expect it to be exited
	I1024 20:16:00.432356 1228184 oci.go:88] couldn't shut down missing-upgrade-183191 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-183191": docker container inspect missing-upgrade-183191 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-183191
	 
	I1024 20:16:00.432439 1228184 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-183191
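The retry.go waits above (300ms, ~1s, ~850ms, ~2.2s, ~1.9s, ~4.1s, ~7.4s) look like exponential backoff with jitter; once the budget is spent, the failure is downgraded to "might be okay" and the force-remove above proceeds anyway. A sketch of that pattern, not minikube's exact implementation:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff doubles a base delay and adds jitter between attempts,
    // returning the last error once attempts are exhausted.
    func retryWithBackoff(attempts int, check func() error) error {
        delay := 300 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = check(); err == nil {
                return nil
            }
            wait := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
        return err
    }

    func main() {
        err := retryWithBackoff(3, func() error {
            return errors.New(`unknown state "missing-upgrade-183191"`)
        })
        fmt.Println("couldn't verify container is exited:", err)
    }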
	I1024 20:16:00.450272 1228184 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-183191
	W1024 20:16:00.467634 1228184 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-183191 returned with exit code 1
	I1024 20:16:00.467728 1228184 cli_runner.go:164] Run: docker network inspect missing-upgrade-183191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 20:16:00.485461 1228184 cli_runner.go:164] Run: docker network rm missing-upgrade-183191
	I1024 20:16:00.590289 1228184 fix.go:114] Sleeping 1 second for extra luck!
	I1024 20:16:01.591350 1228184 start.go:125] createHost starting for "" (driver="docker")
	I1024 20:16:01.595921 1228184 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1024 20:16:01.596098 1228184 start.go:159] libmachine.API.Create for "missing-upgrade-183191" (driver="docker")
	I1024 20:16:01.596126 1228184 client.go:168] LocalClient.Create starting
	I1024 20:16:01.596197 1228184 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem
	I1024 20:16:01.596238 1228184 main.go:141] libmachine: Decoding PEM data...
	I1024 20:16:01.596258 1228184 main.go:141] libmachine: Parsing certificate...
	I1024 20:16:01.596331 1228184 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem
	I1024 20:16:01.596355 1228184 main.go:141] libmachine: Decoding PEM data...
	I1024 20:16:01.596367 1228184 main.go:141] libmachine: Parsing certificate...
	I1024 20:16:01.596639 1228184 cli_runner.go:164] Run: docker network inspect missing-upgrade-183191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1024 20:16:01.618650 1228184 cli_runner.go:211] docker network inspect missing-upgrade-183191 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1024 20:16:01.618734 1228184 network_create.go:281] running [docker network inspect missing-upgrade-183191] to gather additional debugging logs...
	I1024 20:16:01.618756 1228184 cli_runner.go:164] Run: docker network inspect missing-upgrade-183191
	W1024 20:16:01.640512 1228184 cli_runner.go:211] docker network inspect missing-upgrade-183191 returned with exit code 1
	I1024 20:16:01.640543 1228184 network_create.go:284] error running [docker network inspect missing-upgrade-183191]: docker network inspect missing-upgrade-183191: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-183191 not found
	I1024 20:16:01.640556 1228184 network_create.go:286] output of [docker network inspect missing-upgrade-183191]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-183191 not found
	
	** /stderr **
	I1024 20:16:01.640670 1228184 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1024 20:16:01.658515 1228184 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6e280ec74d15 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:2f:b4:6a} reservation:<nil>}
	I1024 20:16:01.658875 1228184 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-52df26ec37c4 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:c9:68:00:f0} reservation:<nil>}
	I1024 20:16:01.659233 1228184 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bf8580727402 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:3d:f3:28:42} reservation:<nil>}
	I1024 20:16:01.659707 1228184 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400355bd50}
	I1024 20:16:01.659727 1228184 network_create.go:124] attempt to create docker network missing-upgrade-183191 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1024 20:16:01.659799 1228184 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-183191 missing-upgrade-183191
	I1024 20:16:01.734909 1228184 network_create.go:108] docker network missing-upgrade-183191 192.168.76.0/24 created
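The subnet walk is visible in the network.go lines above: candidates start at 192.168.49.0/24 and the third octet advances by 9 (49, 58, 67, 76, ...) until a subnet has no existing bridge. A compact sketch of that scan, with the taken set lifted straight from the log:

    package main

    import "fmt"

    func main() {
        taken := map[int]bool{49: true, 58: true, 67: true} // from the log above
        for octet := 49; octet <= 247; octet += 9 {
            if taken[octet] {
                fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
                continue
            }
            fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
            break
        }
    }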
	I1024 20:16:01.734943 1228184 kic.go:118] calculated static IP "192.168.76.2" for the "missing-upgrade-183191" container
	I1024 20:16:01.735030 1228184 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1024 20:16:01.752717 1228184 cli_runner.go:164] Run: docker volume create missing-upgrade-183191 --label name.minikube.sigs.k8s.io=missing-upgrade-183191 --label created_by.minikube.sigs.k8s.io=true
	I1024 20:16:01.770193 1228184 oci.go:103] Successfully created a docker volume missing-upgrade-183191
	I1024 20:16:01.770288 1228184 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-183191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-183191 --entrypoint /usr/bin/test -v missing-upgrade-183191:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1024 20:16:03.621774 1228184 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-183191-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-183191 --entrypoint /usr/bin/test -v missing-upgrade-183191:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.851421573s)
	I1024 20:16:03.621805 1228184 oci.go:107] Successfully prepared a docker volume missing-upgrade-183191
	I1024 20:16:03.621826 1228184 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1024 20:16:03.621985 1228184 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1024 20:16:03.622098 1228184 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1024 20:16:03.701710 1228184 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-183191 --name missing-upgrade-183191 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-183191 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-183191 --network missing-upgrade-183191 --ip 192.168.76.2 --volume missing-upgrade-183191:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1024 20:16:04.117464 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Running}}
	I1024 20:16:04.151484 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	I1024 20:16:04.175573 1228184 cli_runner.go:164] Run: docker exec missing-upgrade-183191 stat /var/lib/dpkg/alternatives/iptables
	I1024 20:16:04.266080 1228184 oci.go:144] the created container "missing-upgrade-183191" has a running status.
	I1024 20:16:04.266112 1228184 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa...
	I1024 20:16:05.166916 1228184 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1024 20:16:05.194992 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	I1024 20:16:05.218764 1228184 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1024 20:16:05.218786 1228184 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-183191 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1024 20:16:05.286405 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	I1024 20:16:05.306604 1228184 machine.go:88] provisioning docker machine ...
	I1024 20:16:05.306635 1228184 ubuntu.go:169] provisioning hostname "missing-upgrade-183191"
	I1024 20:16:05.306710 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:05.333887 1228184 main.go:141] libmachine: Using SSH client type: native
	I1024 20:16:05.334318 1228184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I1024 20:16:05.334331 1228184 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-183191 && echo "missing-upgrade-183191" | sudo tee /etc/hostname
	I1024 20:16:05.492737 1228184 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-183191
	
	I1024 20:16:05.492816 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:05.520361 1228184 main.go:141] libmachine: Using SSH client type: native
	I1024 20:16:05.520776 1228184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I1024 20:16:05.520800 1228184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-183191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-183191/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-183191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:16:05.662639 1228184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:16:05.662710 1228184 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 20:16:05.662746 1228184 ubuntu.go:177] setting up certificates
	I1024 20:16:05.662783 1228184 provision.go:83] configureAuth start
	I1024 20:16:05.662884 1228184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-183191
	I1024 20:16:05.681059 1228184 provision.go:138] copyHostCerts
	I1024 20:16:05.681122 1228184 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 20:16:05.681130 1228184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 20:16:05.681213 1228184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 20:16:05.681304 1228184 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 20:16:05.681309 1228184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 20:16:05.681333 1228184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 20:16:05.681383 1228184 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 20:16:05.681387 1228184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 20:16:05.681409 1228184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 20:16:05.681450 1228184 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-183191 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-183191]
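The server cert above is issued with SANs covering the container IP, loopback, and the machine names, plus the 26280h CertExpiration from the profile config. A self-signed approximation using only Go's standard library (the real provision.go signs with the minikube CA rather than self-signing, and the field choices here are illustrative):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-183191"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            // SANs matching the san=[...] list in the log:
            DNSNames:    []string{"localhost", "minikube", "missing-upgrade-183191"},
            IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Println("server cert DER length:", len(der))
    }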
	I1024 20:16:06.132752 1228184 provision.go:172] copyRemoteCerts
	I1024 20:16:06.132821 1228184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:16:06.132868 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:06.153302 1228184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa Username:docker}
	I1024 20:16:06.250954 1228184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 20:16:06.273689 1228184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:16:06.295999 1228184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:16:06.318722 1228184 provision.go:86] duration metric: configureAuth took 655.90613ms
	I1024 20:16:06.318752 1228184 ubuntu.go:193] setting minikube options for container-runtime
	I1024 20:16:06.318931 1228184 config.go:182] Loaded profile config "missing-upgrade-183191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1024 20:16:06.319038 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:06.338046 1228184 main.go:141] libmachine: Using SSH client type: native
	I1024 20:16:06.338459 1228184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I1024 20:16:06.338479 1228184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:16:06.778505 1228184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:16:06.778525 1228184 machine.go:91] provisioned docker machine in 1.471902812s
	I1024 20:16:06.778535 1228184 client.go:171] LocalClient.Create took 5.182402333s
	I1024 20:16:06.778555 1228184 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-183191" took 5.182458342s
	I1024 20:16:06.778563 1228184 start.go:300] post-start starting for "missing-upgrade-183191" (driver="docker")
	I1024 20:16:06.778572 1228184 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:16:06.778639 1228184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:16:06.778684 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:06.798311 1228184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa Username:docker}
	I1024 20:16:06.899447 1228184 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:16:06.904827 1228184 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 20:16:06.904854 1228184 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 20:16:06.904865 1228184 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 20:16:06.904873 1228184 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1024 20:16:06.904883 1228184 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 20:16:06.904945 1228184 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 20:16:06.905030 1228184 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 20:16:06.905132 1228184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:16:06.914093 1228184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 20:16:06.936564 1228184 start.go:303] post-start completed in 157.985446ms
	I1024 20:16:06.936916 1228184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-183191
	I1024 20:16:06.956068 1228184 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/missing-upgrade-183191/config.json ...
	I1024 20:16:06.956347 1228184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 20:16:06.956395 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:06.974160 1228184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa Username:docker}
	I1024 20:16:07.073199 1228184 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 20:16:07.078796 1228184 start.go:128] duration metric: createHost completed in 5.487407472s
	I1024 20:16:07.078884 1228184 cli_runner.go:164] Run: docker container inspect missing-upgrade-183191 --format={{.State.Status}}
	W1024 20:16:07.097166 1228184 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:16:07.097191 1228184 machine.go:88] provisioning docker machine ...
	I1024 20:16:07.097209 1228184 ubuntu.go:169] provisioning hostname "missing-upgrade-183191"
	I1024 20:16:07.097278 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:07.115768 1228184 main.go:141] libmachine: Using SSH client type: native
	I1024 20:16:07.116233 1228184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I1024 20:16:07.116248 1228184 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-183191 && echo "missing-upgrade-183191" | sudo tee /etc/hostname
	I1024 20:16:07.264607 1228184 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-183191
	
	I1024 20:16:07.264686 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:07.288490 1228184 main.go:141] libmachine: Using SSH client type: native
	I1024 20:16:07.288897 1228184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I1024 20:16:07.288920 1228184 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-183191' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-183191/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-183191' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:16:07.430787 1228184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:16:07.430818 1228184 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 20:16:07.430844 1228184 ubuntu.go:177] setting up certificates
	I1024 20:16:07.430853 1228184 provision.go:83] configureAuth start
	I1024 20:16:07.430919 1228184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-183191
	I1024 20:16:07.450670 1228184 provision.go:138] copyHostCerts
	I1024 20:16:07.450732 1228184 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 20:16:07.450742 1228184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 20:16:07.450815 1228184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 20:16:07.450905 1228184 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 20:16:07.450916 1228184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 20:16:07.450942 1228184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 20:16:07.450998 1228184 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 20:16:07.451007 1228184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 20:16:07.451031 1228184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 20:16:07.451110 1228184 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-183191 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-183191]
	I1024 20:16:07.854148 1228184 provision.go:172] copyRemoteCerts
	I1024 20:16:07.854219 1228184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:16:07.854260 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:07.873465 1228184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa Username:docker}
	I1024 20:16:07.970652 1228184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 20:16:07.992144 1228184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:16:08.015681 1228184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 20:16:08.038239 1228184 provision.go:86] duration metric: configureAuth took 607.371594ms
	I1024 20:16:08.038264 1228184 ubuntu.go:193] setting minikube options for container-runtime
	I1024 20:16:08.038445 1228184 config.go:182] Loaded profile config "missing-upgrade-183191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1024 20:16:08.038556 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:08.057650 1228184 main.go:141] libmachine: Using SSH client type: native
	I1024 20:16:08.058115 1228184 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34384 <nil> <nil>}
	I1024 20:16:08.058136 1228184 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:16:08.394264 1228184 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:16:08.394328 1228184 machine.go:91] provisioned docker machine in 1.297128132s
	I1024 20:16:08.394345 1228184 start.go:300] post-start starting for "missing-upgrade-183191" (driver="docker")
	I1024 20:16:08.394357 1228184 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:16:08.394422 1228184 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:16:08.394467 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:08.420397 1228184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa Username:docker}
	I1024 20:16:08.518942 1228184 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:16:08.522708 1228184 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 20:16:08.522735 1228184 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 20:16:08.522747 1228184 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 20:16:08.522754 1228184 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1024 20:16:08.522767 1228184 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 20:16:08.522830 1228184 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 20:16:08.522911 1228184 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 20:16:08.523028 1228184 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:16:08.531679 1228184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 20:16:08.553293 1228184 start.go:303] post-start completed in 158.930737ms
	I1024 20:16:08.553402 1228184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 20:16:08.553466 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:08.572452 1228184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa Username:docker}
	I1024 20:16:08.667398 1228184 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 20:16:08.672810 1228184 fix.go:56] fixHost completed within 27.37051738s
	I1024 20:16:08.672835 1228184 start.go:83] releasing machines lock for "missing-upgrade-183191", held for 27.370575144s
	I1024 20:16:08.672902 1228184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-183191
	I1024 20:16:08.691063 1228184 ssh_runner.go:195] Run: cat /version.json
	I1024 20:16:08.691099 1228184 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:16:08.691130 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:08.691174 1228184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-183191
	I1024 20:16:08.713284 1228184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa Username:docker}
	I1024 20:16:08.713284 1228184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34384 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/missing-upgrade-183191/id_rsa Username:docker}
	W1024 20:16:08.918646 1228184 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1024 20:16:08.918733 1228184 ssh_runner.go:195] Run: systemctl --version
	I1024 20:16:08.923724 1228184 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:16:09.037398 1228184 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 20:16:09.043527 1228184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:16:09.064326 1228184 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 20:16:09.064409 1228184 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:16:09.096263 1228184 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:16:09.096289 1228184 start.go:472] detecting cgroup driver to use...
	I1024 20:16:09.096321 1228184 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 20:16:09.096377 1228184 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:16:09.126558 1228184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:16:09.138622 1228184 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:16:09.138699 1228184 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:16:09.151402 1228184 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:16:09.163385 1228184 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1024 20:16:09.177019 1228184 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1024 20:16:09.177097 1228184 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:16:09.277071 1228184 docker.go:214] disabling docker service ...
	I1024 20:16:09.277151 1228184 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:16:09.290620 1228184 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:16:09.302936 1228184 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:16:09.407631 1228184 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:16:09.517004 1228184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:16:09.528650 1228184 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:16:09.545838 1228184 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 20:16:09.545910 1228184 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:16:09.559203 1228184 out.go:177] 
	W1024 20:16:09.561254 1228184 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1024 20:16:09.561275 1228184 out.go:239] * 
	* 
	W1024 20:16:09.562207 1228184 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 20:16:09.564263 1228184 out.go:177] 

                                                
                                                
** /stderr **
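The stderr trace above isolates the failure to a single step: after recreating the profile's container from the v0.0.17 kicbase image, minikube v1.31.2 tries to rewrite pause_image in /etc/crio/crio.conf.d/02-crio.conf, a drop-in file that does not exist on that old image, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal guarded sketch of the same edit is shown below; the fallback path /etc/crio/crio.conf is an assumption about the legacy kicbase layout, not something this log confirms:

	# Sketch only: reuse the exact sed expression from the log, but fall back
	# to the legacy single-file cri-o config when the drop-in file is absent.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf   # assumed legacy location, unverified
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"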
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-183191 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-10-24 20:16:09.609987527 +0000 UTC m=+3160.653024823
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-183191
helpers_test.go:235: (dbg) docker inspect missing-upgrade-183191:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "95fa514e2091adc92033ce99a3324af7dabea2ed5130b439b888e8646cc4014d",
	        "Created": "2023-10-24T20:16:03.718459316Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1229937,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-24T20:16:04.107096468Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/95fa514e2091adc92033ce99a3324af7dabea2ed5130b439b888e8646cc4014d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95fa514e2091adc92033ce99a3324af7dabea2ed5130b439b888e8646cc4014d/hostname",
	        "HostsPath": "/var/lib/docker/containers/95fa514e2091adc92033ce99a3324af7dabea2ed5130b439b888e8646cc4014d/hosts",
	        "LogPath": "/var/lib/docker/containers/95fa514e2091adc92033ce99a3324af7dabea2ed5130b439b888e8646cc4014d/95fa514e2091adc92033ce99a3324af7dabea2ed5130b439b888e8646cc4014d-json.log",
	        "Name": "/missing-upgrade-183191",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-183191:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-183191",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/41405a3189452997816bcfc5ae8235453f1901a9ed456f004d07998feae3bdff-init/diff:/var/lib/docker/overlay2/93bf2a7ad283ff15be13e9e604464b25fd0e7c43b0703a84fefcf9e196b10bd1/diff:/var/lib/docker/overlay2/739a774bb24137a7dd31d311f688d51e8d004c000d5fc7fe9245d1d176a62a93/diff:/var/lib/docker/overlay2/daaa352b0140656a2bec626b24aa01074381ed532c21b72f9551708a341f6c08/diff:/var/lib/docker/overlay2/614cc3e4bddadcb04011a771e56399d0caf28a84b1e93253f9b5e47e25c522c9/diff:/var/lib/docker/overlay2/5ad267b0135e016d5e03e2f45bc104985992a7c487b85df302ef4e87d5400f59/diff:/var/lib/docker/overlay2/84b20d595909357a75fc0200aa85543934f9d47cd767b6e424c72ffdb534bcee/diff:/var/lib/docker/overlay2/e6115e72478eb4b3986d7c63f7ab8bbdb6cd558dad4c782e7093389a49b2de92/diff:/var/lib/docker/overlay2/938fe23b49c1aa122a2160539bd3b267da672368025f41379b47b5b0004e6252/diff:/var/lib/docker/overlay2/bab885e59c09499c5cbfad66e306c6a8efd88cdf9b62ed0a3980c4cd66058ebe/diff:/var/lib/docker/overlay2/ebad1f
6c3e51bdc3b033a48058058f65fc61e5956d152eee3149712d00f824a7/diff:/var/lib/docker/overlay2/ab03ef2aa94a937ce43dddc631a1e519c0b224fd4ef64485de0ca1bca7edc934/diff:/var/lib/docker/overlay2/ad9f5612e6c5838d308f3acd7864812438b5ae6edf57c7d9eabb4f9e0df0e880/diff:/var/lib/docker/overlay2/1bc765bac54c68c1af8f6692ea22175af9f08d684f0271a0d71060c54deee584/diff:/var/lib/docker/overlay2/b293753d0e89f344a2757613132317e2d8448e2cf25dec4f3d042c8cd762cfac/diff:/var/lib/docker/overlay2/2d13abe961fa1d64eb35cf0c2712a0a9f1e33e6774d9850a847cc3e3677d7f70/diff:/var/lib/docker/overlay2/246e667e1f041e11feabafedb0d06cd37eccf60090da913eb36cb906841e884a/diff:/var/lib/docker/overlay2/6581c6ad753fc6d99da6a1c6cd1be623dfd7bd032596fad52f0996de541e65bc/diff:/var/lib/docker/overlay2/a8045dd1b98f7cbb066c595ad136ad77b85a03ed724425268b4538accf3841ae/diff:/var/lib/docker/overlay2/6c0779eb0c6dbfd00d643c005bf7300b5da5fd0490fd272a939c2b45a1ae4df7/diff:/var/lib/docker/overlay2/4292c9217394506afe3a59897a0b4f8cd8ccbe67da2933b40cd37bf220481968/diff:/var/lib/d
ocker/overlay2/bd1ae79457c06840815c93c3701d084fc5d0573c370d4bc0e812d2b131545a8a/diff:/var/lib/docker/overlay2/91b6e6ab2471ab02dde1e15134653f68270fc3f827f3e4f71cebbae694656ee0/diff:/var/lib/docker/overlay2/09061f5949a34fbc19f50900b7ae98e354569e2693f79e6fc0d4ee7b3424cd49/diff:/var/lib/docker/overlay2/9563acb369535e3779b6b7a9f4d2e7e333ea5afcb727607bdd48533280f3a1e1/diff:/var/lib/docker/overlay2/34b11453ca1981e7181399c38bfe2960f93973270a3640c985de4cd4830689db/diff:/var/lib/docker/overlay2/a51d148a577af163bdb09f42f55377d8ad1105a87f48831825c8fcaaa1ca05c9/diff:/var/lib/docker/overlay2/c1e128ddd88828b282eacfe5b3d836193b7cb492afa0a3dfdeff344aabc53916/diff:/var/lib/docker/overlay2/3c4c35bf290559115d0b050cea3f0509d045de2606d1436f71c691780ffad0ad/diff:/var/lib/docker/overlay2/64a9a5ec352532c48112a1cdf602b3f80aaf7d05cde42575899a1c6a77c84034/diff:/var/lib/docker/overlay2/82da7338a019ba033b46b2cd4d336a96197bed1466b53ac3eddfe7a0284f2e46/diff:/var/lib/docker/overlay2/fe7f398c43faa3c63f355b373fc55c0ac47998a981195e22632bed6aed1
b286c/diff:/var/lib/docker/overlay2/8c80684e73ebadc515540be478b6bdb9362ca08754c7ba9b0c826ecef5129745/diff:/var/lib/docker/overlay2/a02e0c403e89263cd0ecc8b97cc6d0edfc81c4e450f0446e49fd57c21067419a/diff:/var/lib/docker/overlay2/d34527649c0d44d51314af0b5a9d23b03b92ccccedc66224ab5fdaf4e2085924/diff",
	                "MergedDir": "/var/lib/docker/overlay2/41405a3189452997816bcfc5ae8235453f1901a9ed456f004d07998feae3bdff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/41405a3189452997816bcfc5ae8235453f1901a9ed456f004d07998feae3bdff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/41405a3189452997816bcfc5ae8235453f1901a9ed456f004d07998feae3bdff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-183191",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-183191/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-183191",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-183191",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-183191",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "64e54b9cdf33834ac253bf6b3be9fcacb6070c5678ae1e538802dbfd061c9281",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34384"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34383"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34380"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34382"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34381"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/64e54b9cdf33",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-183191": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "95fa514e2091",
	                        "missing-upgrade-183191"
	                    ],
	                    "NetworkID": "873d9bbf6b6c4429b113f7676af7202dde7be01bb3ac988ea328e319b0ce3f70",
	                    "EndpointID": "21dcec3c0241c0aca32e7b49acae3f5ff3c9607f8243713dc710243a7a232f2d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
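The published ports in the inspect output above are what the harness reads throughout this log: the Go template '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' indexes the first host binding of container port 22/tcp, which in this run resolves to 34384. The same lookup can be reproduced by hand:

	# The same query the log issues before each SSH session; 34384 is this run's value.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-183191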
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-183191 -n missing-upgrade-183191
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-183191 -n missing-upgrade-183191: exit status 6 (334.636161ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 20:16:09.945975 1230937 status.go:415] kubeconfig endpoint: got: 192.168.59.152:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-183191" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
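The exit-status-6 result is the expected symptom of this upgrade path: the kubeconfig still carries the endpoint written by the old binary (192.168.59.152:8443), while the recreated container now sits at 192.168.76.2 on the docker network (see IPAMConfig in the inspect output above). Outside the test harness, the warning's own suggestion would repair the context:

	# Rewrite this profile's kubeconfig entry to point at the current node IP.
	minikube update-context -p missing-upgrade-183191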
helpers_test.go:175: Cleaning up "missing-upgrade-183191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-183191
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-183191: (1.926375836s)
--- FAIL: TestMissingContainerUpgrade (178.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (87.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.559795822.exe start -p stopped-upgrade-825028 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1024 20:16:37.739895 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.559795822.exe start -p stopped-upgrade-825028 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m0.581733762s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.559795822.exe -p stopped-upgrade-825028 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.559795822.exe -p stopped-upgrade-825028 stop: (20.394862601s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-825028 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-825028 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.510425524s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-825028] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-825028 in cluster stopped-upgrade-825028
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-825028" ...
	
	

                                                
                                                
-- /stdout --
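The stderr trace that follows shows why this restart takes the slow path for images: the profile pins Kubernetes v1.20.2, and the current-generation (v18) preload tarball for it returned 404 in this run, so minikube deletes the stale v8 preload and falls back to the per-image cache under .minikube/cache/images. The availability check can be reproduced with the URL copied verbatim from the log:

	# Expect a 404 status line for the retired v1.20.2 preload.
	curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 | head -n 1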
** stderr ** 
	I1024 20:17:33.948095 1235603 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:17:33.948294 1235603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:33.948320 1235603 out.go:309] Setting ErrFile to fd 2...
	I1024 20:17:33.948338 1235603 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:17:33.948612 1235603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 20:17:33.948989 1235603 out.go:303] Setting JSON to false
	I1024 20:17:33.949957 1235603 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36003,"bootTime":1698142651,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 20:17:33.950051 1235603 start.go:138] virtualization:  
	I1024 20:17:33.952447 1235603 out.go:177] * [stopped-upgrade-825028] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 20:17:33.955083 1235603 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:17:33.957360 1235603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:17:33.955206 1235603 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1024 20:17:33.955248 1235603 notify.go:220] Checking for updates...
	I1024 20:17:33.959599 1235603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:17:33.961791 1235603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 20:17:33.963753 1235603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 20:17:33.967700 1235603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:17:33.970160 1235603 config.go:182] Loaded profile config "stopped-upgrade-825028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1024 20:17:33.973091 1235603 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 20:17:33.975471 1235603 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:17:34.020110 1235603 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 20:17:34.020222 1235603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:17:34.109683 1235603 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1024 20:17:34.118339 1235603 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-24 20:17:34.107783273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:17:34.118446 1235603 docker.go:295] overlay module found
	I1024 20:17:34.121566 1235603 out.go:177] * Using the docker driver based on existing profile
	I1024 20:17:34.123671 1235603 start.go:298] selected driver: docker
	I1024 20:17:34.123686 1235603 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-825028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-825028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.197 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 20:17:34.123780 1235603 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:17:34.124378 1235603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:17:34.192923 1235603 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-24 20:17:34.183687262 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:17:34.193270 1235603 cni.go:84] Creating CNI manager for ""
	I1024 20:17:34.193289 1235603 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 20:17:34.193303 1235603 start_flags.go:323] config:
	{Name:stopped-upgrade-825028 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-825028 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.197 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 20:17:34.195649 1235603 out.go:177] * Starting control plane node stopped-upgrade-825028 in cluster stopped-upgrade-825028
	I1024 20:17:34.197477 1235603 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 20:17:34.199569 1235603 out.go:177] * Pulling base image ...
	I1024 20:17:34.201467 1235603 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1024 20:17:34.201550 1235603 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1024 20:17:34.219685 1235603 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1024 20:17:34.219707 1235603 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1024 20:17:34.391562 1235603 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1024 20:17:34.391728 1235603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/stopped-upgrade-825028/config.json ...
	I1024 20:17:34.391820 1235603 cache.go:107] acquiring lock: {Name:mk86a42edbe5cf42a5e9b9e663fc0ed5c1abe176 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.391909 1235603 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 20:17:34.391918 1235603 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 104.295µs
	I1024 20:17:34.391928 1235603 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 20:17:34.391939 1235603 cache.go:107] acquiring lock: {Name:mkcd9edfd9d42acc1960f5575a65ab2fd17d3349 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.391968 1235603 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1024 20:17:34.391973 1235603 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 36.677µs
	I1024 20:17:34.391983 1235603 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1024 20:17:34.391985 1235603 cache.go:195] Successfully downloaded all kic artifacts
	I1024 20:17:34.391992 1235603 cache.go:107] acquiring lock: {Name:mk386e305fc452ab37fa9598d9ed1735e45ad989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.392017 1235603 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1024 20:17:34.392022 1235603 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.778µs
	I1024 20:17:34.392029 1235603 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1024 20:17:34.392023 1235603 start.go:365] acquiring machines lock for stopped-upgrade-825028: {Name:mkb2774d93e04273b65b51b5731f8cc612cac656 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.392037 1235603 cache.go:107] acquiring lock: {Name:mkf8d87607e47b1752dab854516d4a5cc43c6fbc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.392063 1235603 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1024 20:17:34.392062 1235603 start.go:369] acquired machines lock for "stopped-upgrade-825028" in 23.942µs
	I1024 20:17:34.392067 1235603 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 30.728µs
	I1024 20:17:34.392074 1235603 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1024 20:17:34.392076 1235603 start.go:96] Skipping create...Using existing machine configuration
	I1024 20:17:34.392082 1235603 fix.go:54] fixHost starting: 
	I1024 20:17:34.392082 1235603 cache.go:107] acquiring lock: {Name:mkffc4ba0e858609be826598a474f4728612c093 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.392106 1235603 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1024 20:17:34.392112 1235603 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 31.105µs
	I1024 20:17:34.392118 1235603 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1024 20:17:34.392126 1235603 cache.go:107] acquiring lock: {Name:mk1511b28308048d2c648338b4bda0ebac4238b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.392149 1235603 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1024 20:17:34.392158 1235603 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.136µs
	I1024 20:17:34.392164 1235603 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1024 20:17:34.392171 1235603 cache.go:107] acquiring lock: {Name:mk253abc197bf8cc0e29889985b2ac4be67efcb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.392194 1235603 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1024 20:17:34.392199 1235603 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 28.66µs
	I1024 20:17:34.392205 1235603 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1024 20:17:34.392212 1235603 cache.go:107] acquiring lock: {Name:mk12749433743920689878e1e69f0f68bf86cdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 20:17:34.392236 1235603 cache.go:115] /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1024 20:17:34.392240 1235603 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 28.915µs
	I1024 20:17:34.392246 1235603 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1024 20:17:34.392252 1235603 cache.go:87] Successfully saved all images to host disk.
	I1024 20:17:34.392331 1235603 cli_runner.go:164] Run: docker container inspect stopped-upgrade-825028 --format={{.State.Status}}
	I1024 20:17:34.409987 1235603 fix.go:102] recreateIfNeeded on stopped-upgrade-825028: state=Stopped err=<nil>
	W1024 20:17:34.410023 1235603 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 20:17:34.412881 1235603 out.go:177] * Restarting existing docker container for "stopped-upgrade-825028" ...
	I1024 20:17:34.415528 1235603 cli_runner.go:164] Run: docker start stopped-upgrade-825028
	I1024 20:17:34.744002 1235603 cli_runner.go:164] Run: docker container inspect stopped-upgrade-825028 --format={{.State.Status}}
	I1024 20:17:34.765866 1235603 kic.go:427] container "stopped-upgrade-825028" state is running.
	I1024 20:17:34.766234 1235603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-825028
	I1024 20:17:34.785953 1235603 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/stopped-upgrade-825028/config.json ...
	I1024 20:17:34.786188 1235603 machine.go:88] provisioning docker machine ...
	I1024 20:17:34.786209 1235603 ubuntu.go:169] provisioning hostname "stopped-upgrade-825028"
	I1024 20:17:34.786265 1235603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-825028
	I1024 20:17:34.806329 1235603 main.go:141] libmachine: Using SSH client type: native
	I1024 20:17:34.806742 1235603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34392 <nil> <nil>}
	I1024 20:17:34.806754 1235603 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-825028 && echo "stopped-upgrade-825028" | sudo tee /etc/hostname
	I1024 20:17:34.807828 1235603 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1024 20:17:37.961565 1235603 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-825028
	
	I1024 20:17:37.961674 1235603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-825028
	I1024 20:17:37.981539 1235603 main.go:141] libmachine: Using SSH client type: native
	I1024 20:17:37.981989 1235603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34392 <nil> <nil>}
	I1024 20:17:37.982013 1235603 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-825028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-825028/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-825028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 20:17:38.122638 1235603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 20:17:38.122665 1235603 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17485-1112248/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-1112248/.minikube}
	I1024 20:17:38.122692 1235603 ubuntu.go:177] setting up certificates
	I1024 20:17:38.122700 1235603 provision.go:83] configureAuth start
	I1024 20:17:38.122760 1235603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-825028
	I1024 20:17:38.141196 1235603 provision.go:138] copyHostCerts
	I1024 20:17:38.141257 1235603 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem, removing ...
	I1024 20:17:38.141285 1235603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem
	I1024 20:17:38.141363 1235603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.pem (1082 bytes)
	I1024 20:17:38.141469 1235603 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem, removing ...
	I1024 20:17:38.141478 1235603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem
	I1024 20:17:38.141506 1235603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/cert.pem (1123 bytes)
	I1024 20:17:38.141577 1235603 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem, removing ...
	I1024 20:17:38.141584 1235603 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem
	I1024 20:17:38.141609 1235603 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-1112248/.minikube/key.pem (1675 bytes)
	I1024 20:17:38.141666 1235603 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-825028 san=[192.168.59.197 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-825028]
	I1024 20:17:38.410188 1235603 provision.go:172] copyRemoteCerts
	I1024 20:17:38.410252 1235603 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 20:17:38.410298 1235603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-825028
	I1024 20:17:38.433237 1235603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34392 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/stopped-upgrade-825028/id_rsa Username:docker}
	I1024 20:17:38.531311 1235603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 20:17:38.557214 1235603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 20:17:38.585755 1235603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 20:17:38.610458 1235603 provision.go:86] duration metric: configureAuth took 487.742514ms
	I1024 20:17:38.610487 1235603 ubuntu.go:193] setting minikube options for container-runtime
	I1024 20:17:38.610676 1235603 config.go:182] Loaded profile config "stopped-upgrade-825028": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1024 20:17:38.610789 1235603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-825028
	I1024 20:17:38.636042 1235603 main.go:141] libmachine: Using SSH client type: native
	I1024 20:17:38.636453 1235603 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aed00] 0x3b1470 <nil>  [] 0s} 127.0.0.1 34392 <nil> <nil>}
	I1024 20:17:38.636477 1235603 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1024 20:17:39.092233 1235603 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1024 20:17:39.092253 1235603 machine.go:91] provisioned docker machine in 4.306048431s
	I1024 20:17:39.092264 1235603 start.go:300] post-start starting for "stopped-upgrade-825028" (driver="docker")
	I1024 20:17:39.092280 1235603 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 20:17:39.092356 1235603 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 20:17:39.092395 1235603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-825028
	I1024 20:17:39.123930 1235603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34392 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/stopped-upgrade-825028/id_rsa Username:docker}
	I1024 20:17:39.235444 1235603 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 20:17:39.240202 1235603 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1024 20:17:39.240229 1235603 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1024 20:17:39.240240 1235603 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1024 20:17:39.240247 1235603 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1024 20:17:39.240257 1235603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/addons for local assets ...
	I1024 20:17:39.240312 1235603 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-1112248/.minikube/files for local assets ...
	I1024 20:17:39.240390 1235603 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem -> 11176342.pem in /etc/ssl/certs
	I1024 20:17:39.240492 1235603 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 20:17:39.249912 1235603 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/ssl/certs/11176342.pem --> /etc/ssl/certs/11176342.pem (1708 bytes)
	I1024 20:17:39.277166 1235603 start.go:303] post-start completed in 184.868891ms
	I1024 20:17:39.277307 1235603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 20:17:39.277366 1235603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-825028
	I1024 20:17:39.301666 1235603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34392 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/stopped-upgrade-825028/id_rsa Username:docker}
	I1024 20:17:39.401055 1235603 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1024 20:17:39.407628 1235603 fix.go:56] fixHost completed within 5.015537234s
	I1024 20:17:39.407648 1235603 start.go:83] releasing machines lock for "stopped-upgrade-825028", held for 5.015577767s
	I1024 20:17:39.407713 1235603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-825028
	I1024 20:17:39.440918 1235603 ssh_runner.go:195] Run: cat /version.json
	I1024 20:17:39.440970 1235603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-825028
	I1024 20:17:39.441214 1235603 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 20:17:39.441331 1235603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-825028
	I1024 20:17:39.476804 1235603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34392 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/stopped-upgrade-825028/id_rsa Username:docker}
	I1024 20:17:39.495266 1235603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34392 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/stopped-upgrade-825028/id_rsa Username:docker}
	W1024 20:17:39.582776 1235603 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1024 20:17:39.582925 1235603 ssh_runner.go:195] Run: systemctl --version
	I1024 20:17:39.658779 1235603 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1024 20:17:39.825852 1235603 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1024 20:17:39.831771 1235603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:17:39.853160 1235603 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1024 20:17:39.853248 1235603 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 20:17:39.885198 1235603 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 20:17:39.885217 1235603 start.go:472] detecting cgroup driver to use...
	I1024 20:17:39.885248 1235603 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1024 20:17:39.885296 1235603 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 20:17:39.911234 1235603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 20:17:39.922813 1235603 docker.go:198] disabling cri-docker service (if available) ...
	I1024 20:17:39.922909 1235603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1024 20:17:39.934124 1235603 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1024 20:17:39.945361 1235603 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1024 20:17:39.957502 1235603 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1024 20:17:39.957575 1235603 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1024 20:17:40.074613 1235603 docker.go:214] disabling docker service ...
	I1024 20:17:40.074688 1235603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1024 20:17:40.087894 1235603 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1024 20:17:40.099962 1235603 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1024 20:17:40.213343 1235603 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1024 20:17:40.328323 1235603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1024 20:17:40.340184 1235603 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 20:17:40.357290 1235603 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1024 20:17:40.357399 1235603 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1024 20:17:40.371655 1235603 out.go:177] 
	W1024 20:17:40.374061 1235603 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1024 20:17:40.374086 1235603 out.go:239] * 
	W1024 20:17:40.375049 1235603 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 20:17:40.377910 1235603 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-825028 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (87.49s)
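The failing step is visible in the tail of the log above: minikube points CRI-O at the registry.k8s.io/pause:3.2 image by running sed against /etc/crio/crio.conf.d/02-crio.conf, but the container restored from the v1.17.0 profile does not contain that drop-in file, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A defensive variant of the same step would create the drop-in before editing it; this is only a sketch of the idea (the [crio.image] section name is the standard CRI-O location for pause_image, not something shown in this run):

	# Sketch: guard the pause_image update against a missing drop-in file.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ ! -f "$CONF" ]; then
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$CONF"
	else
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	fi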

                                                
                                    

Test pass (267/307)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.02
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.3/json-events 13
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
19 TestBinaryMirror 0.63
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
25 TestAddons/Setup 159.56
27 TestAddons/parallel/Registry 14.89
29 TestAddons/parallel/InspektorGadget 10.87
30 TestAddons/parallel/MetricsServer 5.93
34 TestAddons/parallel/Headlamp 12.23
35 TestAddons/parallel/CloudSpanner 6
36 TestAddons/parallel/LocalPath 10.43
37 TestAddons/parallel/NvidiaDevicePlugin 5.75
40 TestAddons/serial/GCPAuth/Namespaces 0.18
41 TestAddons/StoppedEnableDisable 12.45
42 TestCertOptions 34.44
43 TestCertExpiration 270.8
45 TestForceSystemdFlag 39.14
46 TestForceSystemdEnv 40.38
52 TestErrorSpam/setup 30.11
53 TestErrorSpam/start 0.91
54 TestErrorSpam/status 1.15
55 TestErrorSpam/pause 1.92
56 TestErrorSpam/unpause 1.96
57 TestErrorSpam/stop 1.52
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 75.59
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 40
64 TestFunctional/serial/KubeContext 0.06
65 TestFunctional/serial/KubectlGetPods 0.11
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.87
69 TestFunctional/serial/CacheCmd/cache/add_local 1.18
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.07
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.38
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
74 TestFunctional/serial/CacheCmd/cache/delete 0.15
75 TestFunctional/serial/MinikubeKubectlCmd 0.17
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
77 TestFunctional/serial/ExtraConfig 32.39
78 TestFunctional/serial/ComponentHealth 0.11
79 TestFunctional/serial/LogsCmd 1.85
80 TestFunctional/serial/LogsFileCmd 1.83
81 TestFunctional/serial/InvalidService 4.43
83 TestFunctional/parallel/ConfigCmd 0.62
84 TestFunctional/parallel/DashboardCmd 10.52
85 TestFunctional/parallel/DryRun 0.51
86 TestFunctional/parallel/InternationalLanguage 0.24
87 TestFunctional/parallel/StatusCmd 1.19
91 TestFunctional/parallel/ServiceCmdConnect 48.66
92 TestFunctional/parallel/AddonsCmd 0.17
95 TestFunctional/parallel/SSHCmd 0.85
96 TestFunctional/parallel/CpCmd 1.69
98 TestFunctional/parallel/FileSync 0.31
99 TestFunctional/parallel/CertSync 2
103 TestFunctional/parallel/NodeLabels 0.1
105 TestFunctional/parallel/NonActiveRuntimeDisabled 1.03
107 TestFunctional/parallel/License 0.38
108 TestFunctional/parallel/Version/short 0.08
109 TestFunctional/parallel/Version/components 0.93
111 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.79
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
119 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
120 TestFunctional/parallel/ImageCommands/Setup 1.64
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.93
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.01
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.42
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.93
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.31
127 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
129 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
130 TestFunctional/parallel/ServiceCmd/List 0.56
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
133 TestFunctional/parallel/ServiceCmd/Format 0.44
134 TestFunctional/parallel/ServiceCmd/URL 0.43
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
141 TestFunctional/parallel/MountCmd/any-port 35.2
142 TestFunctional/parallel/MountCmd/specific-port 1.91
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.34
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/delete_addon-resizer_images 0.08
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.02
154 TestIngressAddonLegacy/StartLegacyK8sCluster 99.68
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.73
161 TestJSONOutput/start/Command 77.13
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.82
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.76
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 6.02
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.26
186 TestKicCustomNetwork/create_custom_network 48.87
187 TestKicCustomNetwork/use_default_bridge_network 34.13
188 TestKicExistingNetwork 37.13
189 TestKicCustomSubnet 38.77
190 TestKicStaticIP 34.95
191 TestMainNoArgs 0.07
192 TestMinikubeProfile 68.3
195 TestMountStart/serial/StartWithMountFirst 9.57
196 TestMountStart/serial/VerifyMountFirst 0.32
197 TestMountStart/serial/StartWithMountSecond 9.61
198 TestMountStart/serial/VerifyMountSecond 0.31
199 TestMountStart/serial/DeleteFirst 1.69
200 TestMountStart/serial/VerifyMountPostDelete 0.3
201 TestMountStart/serial/Stop 1.25
202 TestMountStart/serial/RestartStopped 8.01
203 TestMountStart/serial/VerifyMountPostStop 0.3
206 TestMultiNode/serial/FreshStart2Nodes 125.36
207 TestMultiNode/serial/DeployApp2Nodes 6.73
209 TestMultiNode/serial/AddNode 20.77
210 TestMultiNode/serial/ProfileList 0.36
211 TestMultiNode/serial/CopyFile 11.35
212 TestMultiNode/serial/StopNode 2.41
213 TestMultiNode/serial/StartAfterStop 12
214 TestMultiNode/serial/RestartKeepsNodes 119.65
215 TestMultiNode/serial/DeleteNode 5.2
216 TestMultiNode/serial/StopMultiNode 24.15
217 TestMultiNode/serial/RestartMultiNode 83.96
218 TestMultiNode/serial/ValidateNameConflict 37.26
223 TestPreload 148.12
225 TestScheduledStopUnix 112
228 TestInsufficientStorage 11.58
231 TestKubernetesUpgrade 409.05
234 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
235 TestNoKubernetes/serial/StartWithK8s 43.27
236 TestNoKubernetes/serial/StartWithStopK8s 8.75
237 TestNoKubernetes/serial/Start 10.09
238 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
239 TestNoKubernetes/serial/ProfileList 0.94
240 TestNoKubernetes/serial/Stop 1.29
241 TestNoKubernetes/serial/StartNoArgs 7.91
242 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
243 TestStoppedBinaryUpgrade/Setup 1.01
245 TestStoppedBinaryUpgrade/MinikubeLogs 0.69
254 TestPause/serial/Start 54.3
255 TestPause/serial/SecondStartNoReconfiguration 37.3
256 TestPause/serial/Pause 1.1
257 TestPause/serial/VerifyStatus 0.39
258 TestPause/serial/Unpause 0.76
259 TestPause/serial/PauseAgain 1.04
260 TestPause/serial/DeletePaused 2.99
261 TestPause/serial/VerifyDeletedResources 8.2
269 TestNetworkPlugins/group/false 5.54
274 TestStartStop/group/old-k8s-version/serial/FirstStart 134.43
275 TestStartStop/group/old-k8s-version/serial/DeployApp 12.58
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.02
277 TestStartStop/group/old-k8s-version/serial/Stop 12.2
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
279 TestStartStop/group/old-k8s-version/serial/SecondStart 78.88
281 TestStartStop/group/no-preload/serial/FirstStart 70.63
282 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
283 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
284 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.65
285 TestStartStop/group/old-k8s-version/serial/Pause 4.64
287 TestStartStop/group/embed-certs/serial/FirstStart 82.34
288 TestStartStop/group/no-preload/serial/DeployApp 8.65
289 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.04
290 TestStartStop/group/no-preload/serial/Stop 12.52
291 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
292 TestStartStop/group/no-preload/serial/SecondStart 348.65
293 TestStartStop/group/embed-certs/serial/DeployApp 9.62
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
295 TestStartStop/group/embed-certs/serial/Stop 12.15
296 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
297 TestStartStop/group/embed-certs/serial/SecondStart 369.55
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
301 TestStartStop/group/no-preload/serial/Pause 3.61
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.21
304 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.05
305 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
306 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
307 TestStartStop/group/embed-certs/serial/Pause 3.64
309 TestStartStop/group/newest-cni/serial/FirstStart 46.14
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.54
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.46
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 354.18
315 TestStartStop/group/newest-cni/serial/DeployApp 0
316 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
317 TestStartStop/group/newest-cni/serial/Stop 1.31
318 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
319 TestStartStop/group/newest-cni/serial/SecondStart 32.12
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
323 TestStartStop/group/newest-cni/serial/Pause 3.28
324 TestNetworkPlugins/group/auto/Start 90
325 TestNetworkPlugins/group/auto/KubeletFlags 0.36
326 TestNetworkPlugins/group/auto/NetCatPod 10.35
327 TestNetworkPlugins/group/auto/DNS 0.22
328 TestNetworkPlugins/group/auto/Localhost 0.2
329 TestNetworkPlugins/group/auto/HairPin 0.19
330 TestNetworkPlugins/group/kindnet/Start 51.86
331 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
332 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
333 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
334 TestNetworkPlugins/group/kindnet/DNS 0.23
335 TestNetworkPlugins/group/kindnet/Localhost 0.2
336 TestNetworkPlugins/group/kindnet/HairPin 0.19
337 TestNetworkPlugins/group/calico/Start 71.01
338 TestNetworkPlugins/group/calico/ControllerPod 5.05
339 TestNetworkPlugins/group/calico/KubeletFlags 0.35
340 TestNetworkPlugins/group/calico/NetCatPod 13.53
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 11.03
342 TestNetworkPlugins/group/calico/DNS 0.29
343 TestNetworkPlugins/group/calico/Localhost 0.24
344 TestNetworkPlugins/group/calico/HairPin 0.23
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.52
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.54
348 TestNetworkPlugins/group/custom-flannel/Start 76.96
349 TestNetworkPlugins/group/enable-default-cni/Start 80.89
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
351 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
352 TestNetworkPlugins/group/custom-flannel/DNS 0.22
353 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
354 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.43
357 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
358 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
359 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
360 TestNetworkPlugins/group/flannel/Start 70.72
361 TestNetworkPlugins/group/bridge/Start 51.17
362 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
363 TestNetworkPlugins/group/bridge/NetCatPod 10.34
364 TestNetworkPlugins/group/flannel/ControllerPod 5.05
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
366 TestNetworkPlugins/group/flannel/NetCatPod 10.33
367 TestNetworkPlugins/group/bridge/DNS 0.29
368 TestNetworkPlugins/group/bridge/Localhost 0.31
369 TestNetworkPlugins/group/bridge/HairPin 0.31
370 TestNetworkPlugins/group/flannel/DNS 0.22
371 TestNetworkPlugins/group/flannel/Localhost 0.25
372 TestNetworkPlugins/group/flannel/HairPin 0.27
TestDownloadOnly/v1.16.0/json-events (14.02s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-654862 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-654862 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.021405547s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.02s)
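The -o=json flag makes start report its progress as a stream of JSON events instead of human-readable text, which is what this test consumes. As a rough illustration of reading that stream (assuming the output is newline-delimited JSON and that jq is available; neither is asserted by the log above):

	# Sketch: print the event type of each JSON object emitted by -o=json.
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-654862 \
	  --force --alsologtostderr --kubernetes-version=v1.16.0 \
	  --container-runtime=crio --driver=docker | jq -r '.type? // empty'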

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
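preload-exists simply verifies that the tarball fetched during the json-events run is now cached on disk. An equivalent manual check, using the cache path that appears in the download log below (MINIKUBE_HOME is set to the integration .minikube directory in this run):

	# Sketch: confirm the v1.16.0 CRI-O preload tarball is in the local cache.
	MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	test -f "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4" \
	  && echo "preload exists"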

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-654862
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-654862: exit status 85 (92.204671ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-654862 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |          |
	|         | -p download-only-654862        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:23:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:23:29.084717 1117639 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:23:29.084868 1117639 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:23:29.084878 1117639 out.go:309] Setting ErrFile to fd 2...
	I1024 19:23:29.084884 1117639 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:23:29.085143 1117639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	W1024 19:23:29.085284 1117639 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17485-1112248/.minikube/config/config.json: open /home/jenkins/minikube-integration/17485-1112248/.minikube/config/config.json: no such file or directory
	I1024 19:23:29.085704 1117639 out.go:303] Setting JSON to true
	I1024 19:23:29.086824 1117639 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32758,"bootTime":1698142651,"procs":423,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:23:29.086895 1117639 start.go:138] virtualization:  
	I1024 19:23:29.091047 1117639 out.go:97] [download-only-654862] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 19:23:29.093435 1117639 out.go:169] MINIKUBE_LOCATION=17485
	W1024 19:23:29.091303 1117639 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball: no such file or directory
	I1024 19:23:29.091379 1117639 notify.go:220] Checking for updates...
	I1024 19:23:29.095391 1117639 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:23:29.097498 1117639 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:23:29.099547 1117639 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:23:29.102000 1117639 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1024 19:23:29.106584 1117639 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 19:23:29.106875 1117639 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:23:29.130202 1117639 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:23:29.130306 1117639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:23:29.207907 1117639 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-24 19:23:29.198594141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:23:29.208015 1117639 docker.go:295] overlay module found
	I1024 19:23:29.210135 1117639 out.go:97] Using the docker driver based on user configuration
	I1024 19:23:29.210161 1117639 start.go:298] selected driver: docker
	I1024 19:23:29.210167 1117639 start.go:902] validating driver "docker" against <nil>
	I1024 19:23:29.210287 1117639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:23:29.279154 1117639 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-24 19:23:29.269841775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:23:29.279327 1117639 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:23:29.279650 1117639 start_flags.go:386] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1024 19:23:29.279817 1117639 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1024 19:23:29.281915 1117639 out.go:169] Using Docker driver with root privileges
	I1024 19:23:29.284037 1117639 cni.go:84] Creating CNI manager for ""
	I1024 19:23:29.284055 1117639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:23:29.284067 1117639 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1024 19:23:29.284083 1117639 start_flags.go:323] config:
	{Name:download-only-654862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-654862 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:23:29.286228 1117639 out.go:97] Starting control plane node download-only-654862 in cluster download-only-654862
	I1024 19:23:29.286245 1117639 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:23:29.288256 1117639 out.go:97] Pulling base image ...
	I1024 19:23:29.288284 1117639 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 19:23:29.288384 1117639 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:23:29.305112 1117639 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:23:29.305307 1117639 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1024 19:23:29.305401 1117639 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:23:29.362268 1117639 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1024 19:23:29.362292 1117639 cache.go:57] Caching tarball of preloaded images
	I1024 19:23:29.362451 1117639 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1024 19:23:29.365381 1117639 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1024 19:23:29.365402 1117639 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1024 19:23:29.481505 1117639 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1024 19:23:34.426820 1117639 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-654862"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
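Note that this test passes even though the logs command itself exits non-zero: with --download-only no control plane node was ever created, so `minikube logs` has nothing to collect, and the non-zero exit (85 in this run) is the outcome the assertion expects. A minimal reproduction of the check (same binary and profile as above; the expected status is taken from this run, not from minikube's documented exit-code table):

	# Sketch: logs against a download-only profile should fail with status 85.
	out/minikube-linux-arm64 logs -p download-only-654862
	status=$?
	[ "$status" -eq 85 ] && echo "expected failure: no control plane node exists yet"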

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-654862 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-654862 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.998507219s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (13.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-654862
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-654862: exit status 85 (93.534741ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-654862 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |          |
	|         | -p download-only-654862        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-654862 | jenkins | v1.31.2 | 24 Oct 23 19:23 UTC |          |
	|         | -p download-only-654862        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:23:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:23:43.199074 1117712 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:23:43.199331 1117712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:23:43.199342 1117712 out.go:309] Setting ErrFile to fd 2...
	I1024 19:23:43.199349 1117712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:23:43.199667 1117712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	W1024 19:23:43.199862 1117712 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17485-1112248/.minikube/config/config.json: open /home/jenkins/minikube-integration/17485-1112248/.minikube/config/config.json: no such file or directory
	I1024 19:23:43.200168 1117712 out.go:303] Setting JSON to true
	I1024 19:23:43.201369 1117712 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":32773,"bootTime":1698142651,"procs":384,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:23:43.201447 1117712 start.go:138] virtualization:  
	I1024 19:23:43.204032 1117712 out.go:97] [download-only-654862] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 19:23:43.206262 1117712 out.go:169] MINIKUBE_LOCATION=17485
	I1024 19:23:43.204389 1117712 notify.go:220] Checking for updates...
	I1024 19:23:43.210584 1117712 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:23:43.212559 1117712 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:23:43.214560 1117712 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:23:43.216610 1117712 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1024 19:23:43.220656 1117712 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 19:23:43.221233 1117712 config.go:182] Loaded profile config "download-only-654862": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1024 19:23:43.221301 1117712 start.go:810] api.Load failed for download-only-654862: filestore "download-only-654862": Docker machine "download-only-654862" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1024 19:23:43.221415 1117712 driver.go:378] Setting default libvirt URI to qemu:///system
	W1024 19:23:43.221443 1117712 start.go:810] api.Load failed for download-only-654862: filestore "download-only-654862": Docker machine "download-only-654862" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1024 19:23:43.245929 1117712 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:23:43.246077 1117712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:23:43.329280 1117712 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-24 19:23:43.31940018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:23:43.329388 1117712 docker.go:295] overlay module found
	I1024 19:23:43.331353 1117712 out.go:97] Using the docker driver based on existing profile
	I1024 19:23:43.331388 1117712 start.go:298] selected driver: docker
	I1024 19:23:43.331395 1117712 start.go:902] validating driver "docker" against &{Name:download-only-654862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-654862 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:23:43.331589 1117712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:23:43.397893 1117712 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-24 19:23:43.388762457 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:23:43.398331 1117712 cni.go:84] Creating CNI manager for ""
	I1024 19:23:43.398351 1117712 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1024 19:23:43.398362 1117712 start_flags.go:323] config:
	{Name:download-only-654862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-654862 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:23:43.400592 1117712 out.go:97] Starting control plane node download-only-654862 in cluster download-only-654862
	I1024 19:23:43.400617 1117712 cache.go:122] Beginning downloading kic base image for docker with crio
	I1024 19:23:43.402668 1117712 out.go:97] Pulling base image ...
	I1024 19:23:43.402700 1117712 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:23:43.402878 1117712 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local docker daemon
	I1024 19:23:43.420930 1117712 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 to local cache
	I1024 19:23:43.421058 1117712 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory
	I1024 19:23:43.421082 1117712 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 in local cache directory, skipping pull
	I1024 19:23:43.421087 1117712 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 exists in cache, skipping pull
	I1024 19:23:43.421095 1117712 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 as a tarball
	I1024 19:23:43.480018 1117712 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1024 19:23:43.480042 1117712 cache.go:57] Caching tarball of preloaded images
	I1024 19:23:43.480201 1117712 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1024 19:23:43.482560 1117712 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1024 19:23:43.482589 1117712 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1024 19:23:43.619997 1117712 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/17485-1112248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-654862"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-654862
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-775727 --alsologtostderr --binary-mirror http://127.0.0.1:38809 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-775727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-775727
--- PASS: TestBinaryMirror (0.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-228070
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-228070: exit status 85 (86.089083ms)

-- stdout --
	* Profile "addons-228070" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-228070"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-228070
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-228070: exit status 85 (103.767491ms)

-- stdout --
	* Profile "addons-228070" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-228070"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (159.56s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-228070 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-228070 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m39.563297633s)
--- PASS: TestAddons/Setup (159.56s)

TestAddons/parallel/Registry (14.89s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 48.720125ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-chlmt" [1869d1d7-07f4-4d9c-94d6-4bcc1e8efe3a] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.028074225s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xdq2s" [16223b37-cd2a-41d2-8ebd-ee2c4fcef1a2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.027791391s
addons_test.go:339: (dbg) Run:  kubectl --context addons-228070 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-228070 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-228070 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.608733187s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-228070 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-228070 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.89s)

TestAddons/parallel/InspektorGadget (10.87s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9529c" [22180a2d-39bb-4c79-874b-1378443c3d67] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01924872s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-228070
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-228070: (5.853207045s)
--- PASS: TestAddons/parallel/InspektorGadget (10.87s)

TestAddons/parallel/MetricsServer (5.93s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 5.782721ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-fgmf7" [de24d5b2-08eb-4c8a-9c9b-3d6eb76712d8] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012667502s
addons_test.go:414: (dbg) Run:  kubectl --context addons-228070 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-228070 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.93s)

TestAddons/parallel/Headlamp (12.23s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-228070 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-228070 --alsologtostderr -v=1: (1.194646068s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-tn68w" [1485d773-ce38-4719-83bf-04feafec8b66] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-tn68w" [1485d773-ce38-4719-83bf-04feafec8b66] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.036566376s
--- PASS: TestAddons/parallel/Headlamp (12.23s)

TestAddons/parallel/CloudSpanner (6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-kvmjz" [da2fb636-bd76-410d-b977-c6d633126a08] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.023317336s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-228070
--- PASS: TestAddons/parallel/CloudSpanner (6.00s)

TestAddons/parallel/LocalPath (10.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-228070 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-228070 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-228070 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cd5554e3-d1c7-4f60-9306-1617d732dc40] Pending
helpers_test.go:344: "test-local-path" [cd5554e3-d1c7-4f60-9306-1617d732dc40] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cd5554e3-d1c7-4f60-9306-1617d732dc40] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
2023/10/24 19:26:51 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:344: "test-local-path" [cd5554e3-d1c7-4f60-9306-1617d732dc40] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.015271717s
addons_test.go:890: (dbg) Run:  kubectl --context addons-228070 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-228070 ssh "cat /opt/local-path-provisioner/pvc-320b3b4e-2781-4009-93c4-e0f32e3a5a23_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-228070 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-228070 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-228070 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.43s)

TestAddons/parallel/NvidiaDevicePlugin (5.75s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vnscp" [638ff2b2-e718-4d5a-aa20-ab6d29a35186] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.075668726s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-228070
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.75s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-228070 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-228070 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.45s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-228070
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-228070: (12.13258561s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-228070
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-228070
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-228070
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

TestCertOptions (34.44s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-975545 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-975545 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.619627578s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-975545 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-975545 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-975545 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-975545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-975545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-975545: (2.070936101s)
--- PASS: TestCertOptions (34.44s)

TestCertExpiration (270.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-829611 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1024 20:21:37.740809 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-829611 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.938240948s)
E1024 20:22:37.185365 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-829611 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-829611 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (48.043896627s)
helpers_test.go:175: Cleaning up "cert-expiration-829611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-829611
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-829611: (2.817740151s)
--- PASS: TestCertExpiration (270.80s)

TestForceSystemdFlag (39.14s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-850104 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-850104 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.084076137s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-850104 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-850104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-850104
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-850104: (2.645675507s)
--- PASS: TestForceSystemdFlag (39.14s)

TestForceSystemdEnv (40.38s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-560039 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-560039 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.424491445s)
helpers_test.go:175: Cleaning up "force-systemd-env-560039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-560039
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-560039: (2.959680345s)
--- PASS: TestForceSystemdEnv (40.38s)

TestErrorSpam/setup (30.11s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-611849 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-611849 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-611849 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-611849 --driver=docker  --container-runtime=crio: (30.105822404s)
--- PASS: TestErrorSpam/setup (30.11s)

TestErrorSpam/start (0.91s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 start --dry-run
--- PASS: TestErrorSpam/start (0.91s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.92s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 pause
E1024 19:36:37.739875 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:36:37.746952 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:36:37.757187 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:36:37.777444 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:36:37.817664 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:36:37.897904 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 pause
E1024 19:36:38.058680 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:36:38.378943 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 pause
E1024 19:36:39.019992 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
--- PASS: TestErrorSpam/pause (1.92s)

TestErrorSpam/unpause (1.96s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 unpause
E1024 19:36:40.300761 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 unpause
--- PASS: TestErrorSpam/unpause (1.96s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 stop: (1.279108228s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-611849 --log_dir /tmp/nospam-611849 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17485-1112248/.minikube/files/etc/test/nested/copy/1117634/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.59s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-419430 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1024 19:36:47.983104 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:36:58.223551 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:37:18.704379 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 19:37:59.664960 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-419430 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.58822851s)
--- PASS: TestFunctional/serial/StartWithProxy (75.59s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-419430 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-419430 --alsologtostderr -v=8: (39.994374357s)
functional_test.go:659: soft start took 39.99490128s for "functional-419430" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.00s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-419430 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 cache add registry.k8s.io/pause:3.1: (1.314679589s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 cache add registry.k8s.io/pause:3.3: (1.331001065s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 cache add registry.k8s.io/pause:latest: (1.228578016s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-419430 /tmp/TestFunctionalserialCacheCmdcacheadd_local3961268467/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 cache add minikube-local-cache-test:functional-419430
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 cache delete minikube-local-cache-test:functional-419430
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-419430
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.38s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (339.821691ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 cache reload: (1.057138256s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 kubectl -- --context functional-419430 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-419430 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (32.39s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-419430 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1024 19:39:21.585650 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-419430 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.393062947s)
functional_test.go:757: restart took 32.393156132s for "functional-419430" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.39s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-419430 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 logs: (1.849566752s)
--- PASS: TestFunctional/serial/LogsCmd (1.85s)

TestFunctional/serial/LogsFileCmd (1.83s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 logs --file /tmp/TestFunctionalserialLogsFileCmd2701018813/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 logs --file /tmp/TestFunctionalserialLogsFileCmd2701018813/001/logs.txt: (1.823274523s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.83s)

TestFunctional/serial/InvalidService (4.43s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-419430 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-419430
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-419430: exit status 115 (599.529543ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31251 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-419430 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.43s)
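
The interesting part of InvalidService is the exit code: minikube exits 115 (SVC_UNREACHABLE) because the service has no running pods behind it. A minimal sketch of recovering that code from a wrapped run, using the same binary and arguments as above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-419430")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Expect 115 here, matching the SVC_UNREACHABLE exit in the log.
		fmt.Println("exit status:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}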

TestFunctional/parallel/ConfigCmd (0.62s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 config get cpus: exit status 14 (96.987535ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 config get cpus: exit status 14 (101.399322ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.62s)
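
ConfigCmd drives a set/get/unset round trip: `config get cpus` exits 14 while the key is absent, 0 once it has been set, and 14 again after the unset. A sketch of the same sequence, asserting exit codes rather than parsing stderr:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the minikube binary from the log and returns its exit code.
func run(args ...string) int {
	err := exec.Command("out/minikube-linux-arm64", args...).Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		return 0
	case errors.As(err, &ee):
		return ee.ExitCode()
	default:
		return -1 // the binary could not be started at all
	}
}

func main() {
	p := "functional-419430"
	run("-p", p, "config", "unset", "cpus")
	fmt.Println(run("-p", p, "config", "get", "cpus")) // 14: key not in config
	run("-p", p, "config", "set", "cpus", "2")
	fmt.Println(run("-p", p, "config", "get", "cpus")) // 0: key present
	run("-p", p, "config", "unset", "cpus")
	fmt.Println(run("-p", p, "config", "get", "cpus")) // 14 again
}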

TestFunctional/parallel/DashboardCmd (10.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-419430 --alsologtostderr -v=1]
2023/10/24 19:44:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-419430 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1144394: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.52s)
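
The DEBUG line above is a readiness probe against the proxy URL that `minikube dashboard --url` prints. A sketch of such a probe (the URL is the one from this run; the retry policy is an assumption):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	for i := 0; i < 30; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("dashboard is up")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("dashboard never became ready")
}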

TestFunctional/parallel/DryRun (0.51s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-419430 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-419430 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (222.308002ms)

-- stdout --
	* [functional-419430] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1024 19:44:43.348210 1144179 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:44:43.348411 1144179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:44:43.348419 1144179 out.go:309] Setting ErrFile to fd 2...
	I1024 19:44:43.348425 1144179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:44:43.348689 1144179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 19:44:43.349118 1144179 out.go:303] Setting JSON to false
	I1024 19:44:43.350080 1144179 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34033,"bootTime":1698142651,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:44:43.350157 1144179 start.go:138] virtualization:  
	I1024 19:44:43.353034 1144179 out.go:177] * [functional-419430] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 19:44:43.356013 1144179 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:44:43.356208 1144179 notify.go:220] Checking for updates...
	I1024 19:44:43.359148 1144179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:44:43.361079 1144179 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:44:43.363291 1144179 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:44:43.365258 1144179 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 19:44:43.367992 1144179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:44:43.370958 1144179 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:44:43.371578 1144179 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:44:43.395789 1144179 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:44:43.395901 1144179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:44:43.483515 1144179 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-24 19:44:43.473229111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:44:43.483624 1144179 docker.go:295] overlay module found
	I1024 19:44:43.485705 1144179 out.go:177] * Using the docker driver based on existing profile
	I1024 19:44:43.487641 1144179 start.go:298] selected driver: docker
	I1024 19:44:43.487659 1144179 start.go:902] validating driver "docker" against &{Name:functional-419430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-419430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:44:43.487774 1144179 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:44:43.490230 1144179 out.go:177] 
	W1024 19:44:43.491951 1144179 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1024 19:44:43.493681 1144179 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-419430 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.51s)
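
The first dry run fails validation before doing any work: 250MiB is below the usable minimum the error message cites. A sketch of that comparison (the 1800MB threshold is taken from the log line, not from minikube's source):

package main

import "fmt"

const minUsableMB = 1800 // value quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

func validateMemory(requestedMiB int) error {
	if requestedMiB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, as in the first dry run
	fmt.Println(validateMemory(4000)) // passes, as in the second
}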

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-419430 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-419430 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (237.406335ms)

-- stdout --
	* [functional-419430] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1024 19:44:43.122394 1144138 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:44:43.122558 1144138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:44:43.122566 1144138 out.go:309] Setting ErrFile to fd 2...
	I1024 19:44:43.122572 1144138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:44:43.122925 1144138 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 19:44:43.123269 1144138 out.go:303] Setting JSON to false
	I1024 19:44:43.124125 1144138 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":34033,"bootTime":1698142651,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 19:44:43.124195 1144138 start.go:138] virtualization:  
	I1024 19:44:43.126970 1144138 out.go:177] * [functional-419430] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1024 19:44:43.129565 1144138 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:44:43.131288 1144138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:44:43.129722 1144138 notify.go:220] Checking for updates...
	I1024 19:44:43.135093 1144138 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 19:44:43.137031 1144138 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 19:44:43.139088 1144138 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 19:44:43.141211 1144138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:44:43.143485 1144138 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 19:44:43.144139 1144138 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:44:43.169895 1144138 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 19:44:43.170033 1144138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 19:44:43.259635 1144138 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-24 19:44:43.248517321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 19:44:43.259738 1144138 docker.go:295] overlay module found
	I1024 19:44:43.262318 1144138 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1024 19:44:43.264181 1144138 start.go:298] selected driver: docker
	I1024 19:44:43.264201 1144138 start.go:902] validating driver "docker" against &{Name:functional-419430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-419430 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:44:43.264327 1144138 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:44:43.266800 1144138 out.go:177] 
	W1024 19:44:43.268831 1144138 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1024 19:44:43.271117 1144138 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
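
The French output above comes from running the same dry run under a French locale. A sketch of reproducing it; which variable minikube actually consults (LC_ALL vs LANG) is an assumption here, so both are set:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-419430",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	// Later duplicates win in cmd.Env, so appending overrides the inherited locale.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // exits 23, exactly like the English run
	fmt.Print(string(out))
}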

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)
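
The -f template above pulls individual fields out of the status struct; `status -o json` exposes the same data for programs. A sketch of consuming it, with field names taken from the template in the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-419430",
		"status", "-o", "json").Output()
	if err != nil {
		log.Fatal(err) // note: status exits non-zero when the cluster is not fully up
	}
	var s clusterStatus
	if err := json.Unmarshal(out, &s); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		s.Host, s.Kubelet, s.APIServer, s.Kubeconfig)
}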

TestFunctional/parallel/ServiceCmdConnect (48.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-419430 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-419430 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-wjhzm" [68576db0-eb3d-469d-a686-7e3263772c33] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-wjhzm" [68576db0-eb3d-469d-a686-7e3263772c33] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 48.014254241s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31811
functional_test.go:1674: http://192.168.49.2:31811: success! body:

Hostname: hello-node-connect-7799dfb7c6-wjhzm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31811
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (48.66s)
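
The final step above fetches the NodePort URL printed by `minikube service --url` and checks that the echoserver answered. A sketch of that request, using the endpoint from this run:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:31811") // endpoint from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("echoserver reachable through the NodePort")
	}
}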

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/SSHCmd (0.85s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

TestFunctional/parallel/CpCmd (1.69s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh -n functional-419430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 cp functional-419430:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3508738083/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh -n functional-419430 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.69s)
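
A sketch of the round trip CpCmd exercises: copy a local file into the node, read it back over `minikube ssh`, and confirm the content survived unchanged:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	if err := exec.Command("out/minikube-linux-arm64", "-p", "functional-419430",
		"cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatal(err)
	}
	remote, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-419430",
		"ssh", "-n", "functional-419430", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("content survived:", string(local) == string(remote))
}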

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1117634/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo cat /etc/test/nested/copy/1117634/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1117634.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo cat /etc/ssl/certs/1117634.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1117634.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo cat /usr/share/ca-certificates/1117634.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11176342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo cat /etc/ssl/certs/11176342.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11176342.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo cat /usr/share/ca-certificates/11176342.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.00s)
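
CertSync verifies that the same uploaded certificate is visible at each synced path; the *.0 names (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming for the corresponding PEM files. A sketch of the comparison:

package main

import (
	"bytes"
	"fmt"
	"log"
	"os/exec"
)

// catInVM reads a file inside the minikube node over ssh.
func catInVM(path string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-419430",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		log.Fatalf("%s: %v", path, err)
	}
	return out
}

func main() {
	a := catInVM("/etc/ssl/certs/1117634.pem")
	b := catInVM("/usr/share/ca-certificates/1117634.pem")
	c := catInVM("/etc/ssl/certs/51391683.0")
	fmt.Println("all three copies identical:", bytes.Equal(a, b) && bytes.Equal(b, c))
}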

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-419430 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
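
The --template above is a plain Go text/template that ranges over the node's metadata.labels map and prints each key. The same template evaluated locally against a stand-in label map (the labels here are illustrative, not this node's actual set):

package main

import (
	"log"
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/arch":     "arm64",
		"kubernetes.io/hostname": "functional-419430",
		"kubernetes.io/os":       "linux",
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := .}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, labels); err != nil {
		log.Fatal(err)
	}
}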

TestFunctional/parallel/NonActiveRuntimeDisabled (1.03s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 ssh "sudo systemctl is-active docker": exit status 1 (497.284243ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 ssh "sudo systemctl is-active containerd": exit status 1 (531.840531ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.03s)
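
The non-zero exits above are the expected outcome: `systemctl is-active` exits 0 only for an active unit and 3 for an inactive one, and `minikube ssh` propagates that code. A sketch of turning it into a boolean:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func isActive(unit string) bool {
	err := exec.Command("out/minikube-linux-arm64", "-p", "functional-419430",
		"ssh", "sudo systemctl is-active "+unit).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return false // e.g. exit status 3: unit is inactive
	}
	return err == nil
}

func main() {
	fmt.Println("docker active:", isActive("docker"))         // false on this cluster
	fmt.Println("containerd active:", isActive("containerd")) // false on this cluster
	fmt.Println("crio active:", isActive("crio"))             // crio is the active runtime
}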

TestFunctional/parallel/License (0.38s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.38s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.93s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.93s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-419430 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-419430 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-419430 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1139943: os: process already finished
helpers_test.go:502: unable to terminate pid 1139782: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-419430 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)
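
The "unable to kill pid" lines above are benign: the helper tears tunnels down tolerantly, treating a process that has already exited as success. A sketch of that pattern (the sleep stands in for the test's real work):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-419430",
		"tunnel", "--alsologtostderr")
	if err := cmd.Start(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	go cmd.Wait() // reap the child if it exits on its own
	time.Sleep(500 * time.Millisecond)
	if err := cmd.Process.Kill(); err != nil {
		// Matches the benign "os: process already finished" seen in the log.
		fmt.Println("unable to kill pid", cmd.Process.Pid, ":", err)
	}
}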

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-419430 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-419430 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-419430
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-419430 image ls --format short --alsologtostderr:
I1024 19:44:55.384077 1144673 out.go:296] Setting OutFile to fd 1 ...
I1024 19:44:55.384321 1144673 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:55.384335 1144673 out.go:309] Setting ErrFile to fd 2...
I1024 19:44:55.384342 1144673 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:55.384630 1144673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
I1024 19:44:55.385340 1144673 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:55.385516 1144673 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:55.386137 1144673 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
I1024 19:44:55.404842 1144673 ssh_runner.go:195] Run: systemctl --version
I1024 19:44:55.404911 1144673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
I1024 19:44:55.423656 1144673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
I1024 19:44:55.519513 1144673 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-419430 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 537e9a59ee2fd | 121MB  |
| registry.k8s.io/kube-proxy              | v1.28.3            | a5dd5cdd6d3ef | 69.9MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 8276439b4f237 | 117MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| gcr.io/google-containers/addon-resizer  | functional-419430  | ffd4cfbbe753e | 34.1MB |
| localhost/my-image                      | functional-419430  | 9546564cd6fa1 | 1.64MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 42a4e73724daa | 59.2MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-419430 image ls --format table --alsologtostderr:
I1024 19:44:58.952988 1144992 out.go:296] Setting OutFile to fd 1 ...
I1024 19:44:58.953148 1144992 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:58.953158 1144992 out.go:309] Setting ErrFile to fd 2...
I1024 19:44:58.953164 1144992 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:58.953429 1144992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
I1024 19:44:58.954102 1144992 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:58.954244 1144992 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:58.954884 1144992 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
I1024 19:44:58.976480 1144992 ssh_runner.go:195] Run: systemctl --version
I1024 19:44:58.976544 1144992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
I1024 19:44:58.997324 1144992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
I1024 19:44:59.095300 1144992 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-419430 image ls --format json --alsologtostderr:
[{"id":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"69926807"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"9546564cd6fa19e53350c3f8a28192a4675db2759062d91e4708b4732c7ce544","repoDigests":["localhost/my-image@sha256:3bbf35829e6879ca477f3317966991786d8ee6fda3ba4e3cc1ac9c833c4be242"],"repoTags":["localhost/my-image:functional-419430"],"size":"1640226"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"1ecd702cb70b59b53dfa8caf859bc4383fc31055415f27519caf3e2c7db89a11","repoDigests":["docker.io/library/50401e64e4cb8e9105333cb981543a734eca4b0ed5937c62cdc5646ff607a967-tmp@sha256:7f6372e09b7526e035819545a69fe8911c72990b7aaa55759af8e519e01d93c2"],"repoTags":[],"size":"1637644"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"121054158"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-419430"],"size":"34114467"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"117252916"},{"id":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"59188020"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-419430 image ls --format json --alsologtostderr:
I1024 19:44:58.667169 1144965 out.go:296] Setting OutFile to fd 1 ...
I1024 19:44:58.667497 1144965 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:58.667531 1144965 out.go:309] Setting ErrFile to fd 2...
I1024 19:44:58.667559 1144965 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:58.667948 1144965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
I1024 19:44:58.668923 1144965 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:58.669166 1144965 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:58.669972 1144965 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
I1024 19:44:58.695525 1144965 ssh_runner.go:195] Run: systemctl --version
I1024 19:44:58.695584 1144965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
I1024 19:44:58.716429 1144965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
I1024 19:44:58.811569 1144965 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
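
Each entry in the JSON listing above carries an image ID, repo digests, repo tags, and a size in bytes (encoded as a string). A sketch of decoding it:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-419430",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%.13s  %d tag(s)  %s bytes\n", img.ID, len(img.RepoTags), img.Size)
	}
}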

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-419430 image ls --format yaml --alsologtostderr:
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-419430
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "59188020"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "121054158"
- id: 8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "117252916"
- id: a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "69926807"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-419430 image ls --format yaml --alsologtostderr:
I1024 19:44:55.633958 1144700 out.go:296] Setting OutFile to fd 1 ...
I1024 19:44:55.634113 1144700 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:55.634121 1144700 out.go:309] Setting ErrFile to fd 2...
I1024 19:44:55.634128 1144700 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:55.634400 1144700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
I1024 19:44:55.635022 1144700 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:55.635154 1144700 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:55.635640 1144700 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
I1024 19:44:55.658242 1144700 ssh_runner.go:195] Run: systemctl --version
I1024 19:44:55.658317 1144700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
I1024 19:44:55.676467 1144700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
I1024 19:44:55.771577 1144700 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 ssh pgrep buildkitd: exit status 1 (309.640771ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image build -t localhost/my-image:functional-419430 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 image build -t localhost/my-image:functional-419430 testdata/build --alsologtostderr: (2.192368836s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-419430 image build -t localhost/my-image:functional-419430 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1ecd702cb70
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-419430
--> 9546564cd6f
Successfully tagged localhost/my-image:functional-419430
9546564cd6fa19e53350c3f8a28192a4675db2759062d91e4708b4732c7ce544
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-419430 image build -t localhost/my-image:functional-419430 testdata/build --alsologtostderr:
I1024 19:44:56.214949 1144776 out.go:296] Setting OutFile to fd 1 ...
I1024 19:44:56.215892 1144776 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:56.215907 1144776 out.go:309] Setting ErrFile to fd 2...
I1024 19:44:56.215914 1144776 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:44:56.216216 1144776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
I1024 19:44:56.216991 1144776 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:56.217646 1144776 config.go:182] Loaded profile config "functional-419430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1024 19:44:56.218191 1144776 cli_runner.go:164] Run: docker container inspect functional-419430 --format={{.State.Status}}
I1024 19:44:56.239490 1144776 ssh_runner.go:195] Run: systemctl --version
I1024 19:44:56.239540 1144776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-419430
I1024 19:44:56.262568 1144776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34220 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/functional-419430/id_rsa Username:docker}
I1024 19:44:56.359608 1144776 build_images.go:151] Building image from path: /tmp/build.2918310527.tar
I1024 19:44:56.359679 1144776 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1024 19:44:56.370700 1144776 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2918310527.tar
I1024 19:44:56.375201 1144776 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2918310527.tar: stat -c "%s %y" /var/lib/minikube/build/build.2918310527.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2918310527.tar': No such file or directory
I1024 19:44:56.375240 1144776 ssh_runner.go:362] scp /tmp/build.2918310527.tar --> /var/lib/minikube/build/build.2918310527.tar (3072 bytes)
I1024 19:44:56.404720 1144776 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2918310527
I1024 19:44:56.418155 1144776 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2918310527 -xf /var/lib/minikube/build/build.2918310527.tar
I1024 19:44:56.431032 1144776 crio.go:297] Building image: /var/lib/minikube/build/build.2918310527
I1024 19:44:56.431090 1144776 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-419430 /var/lib/minikube/build/build.2918310527 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1024 19:44:58.293371 1144776 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-419430 /var/lib/minikube/build/build.2918310527 --cgroup-manager=cgroupfs: (1.862255176s)
I1024 19:44:58.293432 1144776 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2918310527
I1024 19:44:58.307070 1144776 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2918310527.tar
I1024 19:44:58.318900 1144776 build_images.go:207] Built localhost/my-image:functional-419430 from /tmp/build.2918310527.tar
I1024 19:44:58.318935 1144776 build_images.go:123] succeeded building to: functional-419430
I1024 19:44:58.318940 1144776 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)
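
The STEP 1/3..3/3 lines above imply the Dockerfile under testdata/build; reconstructed from the log alone, so any directives that produce no STEP output would be missing from this sketch:

    # testdata/build/Dockerfile, as implied by the podman build output
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /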

TestFunctional/parallel/ImageCommands/Setup (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.622023379s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-419430
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.64s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image load --daemon gcr.io/google-containers/addon-resizer:functional-419430 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 image load --daemon gcr.io/google-containers/addon-resizer:functional-419430 --alsologtostderr: (3.64979554s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.93s)
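
The trailing `image ls` is how the test verifies the load landed. An equivalent manual check against the CRI-O image store, reusing the `ssh` and `crictl` invocations seen elsewhere in this log:

    # confirm the loaded tag is visible to the container runtime
    out/minikube-linux-arm64 -p functional-419430 ssh -- sudo crictl images | grep addon-resizer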

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image load --daemon gcr.io/google-containers/addon-resizer:functional-419430 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 image load --daemon gcr.io/google-containers/addon-resizer:functional-419430 --alsologtostderr: (2.741873581s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.584664243s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-419430
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image load --daemon gcr.io/google-containers/addon-resizer:functional-419430 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 image load --daemon gcr.io/google-containers/addon-resizer:functional-419430 --alsologtostderr: (3.539917921s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image save gcr.io/google-containers/addon-resizer:functional-419430 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image rm gcr.io/google-containers/addon-resizer:functional-419430 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-419430 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.060095095s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.31s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-419430
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 image save --daemon gcr.io/google-containers/addon-resizer:functional-419430 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-419430
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)
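
ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise a full save/remove/load round trip. Condensed from the commands logged above (the tarball path is shortened here for readability):

    out/minikube-linux-arm64 -p functional-419430 image save gcr.io/google-containers/addon-resizer:functional-419430 ./addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-419430 image rm gcr.io/google-containers/addon-resizer:functional-419430
    out/minikube-linux-arm64 -p functional-419430 image load ./addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-419430 image save --daemon gcr.io/google-containers/addon-resizer:functional-419430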

TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-419430 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-419430 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-464n9" [225423e1-4b42-4b95-92f7-7e5cf0b48374] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-464n9" [225423e1-4b42-4b95-92f7-7e5cf0b48374] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.01384729s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)
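
The harness polls pod phases itself (helpers_test.go:344); outside the harness the same readiness gate can be written with kubectl wait, a sketch rather than the test's own mechanism:

    kubectl --context functional-419430 wait --for=condition=ready pod --selector=app=hello-node --timeout=10m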

TestFunctional/parallel/ServiceCmd/List (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.56s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 service list -o json
functional_test.go:1493: Took "553.686285ms" to run "out/minikube-linux-arm64 -p functional-419430 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30300
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30300
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
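
HTTPS, Format, and URL all resolve the same NodePort endpoint, 192.168.49.2:30300. A smoke test against it, assuming the host can reach the cluster IP directly (true for the docker driver used in this run):

    curl -fsS http://192.168.49.2:30300/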

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "365.892304ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "72.568401ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "366.178276ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "126.401603ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
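
`profile list -o json` is the machine-readable variant timed above. A sketch of extracting profile names with jq; the `valid`/`Name` field names are assumptions about minikube's output schema, which this log does not show:

    out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'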

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)
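
All three update-context cases rewrite the kubeconfig entry for the profile. A hypothetical way to inspect the resulting API server address (not part of the test):

    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-419430")].cluster.server}'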

TestFunctional/parallel/MountCmd/any-port (35.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdany-port3472141859/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698176642093815665" to /tmp/TestFunctionalparallelMountCmdany-port3472141859/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698176642093815665" to /tmp/TestFunctionalparallelMountCmdany-port3472141859/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698176642093815665" to /tmp/TestFunctionalparallelMountCmdany-port3472141859/001/test-1698176642093815665
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (419.214345ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 24 19:44 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 24 19:44 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 24 19:44 test-1698176642093815665
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh cat /mount-9p/test-1698176642093815665
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-419430 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c64630da-f9fe-4a48-a050-1d7c509c99dc] Pending
helpers_test.go:344: "busybox-mount" [c64630da-f9fe-4a48-a050-1d7c509c99dc] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c64630da-f9fe-4a48-a050-1d7c509c99dc] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c64630da-f9fe-4a48-a050-1d7c509c99dc] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 32.017286817s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-419430 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdany-port3472141859/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (35.20s)
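
The pattern above, start the mount daemon and then retry findmnt until the 9p filesystem appears, reproduces outside the harness as follows (/tmp/hostdir is a placeholder path):

    out/minikube-linux-arm64 mount -p functional-419430 /tmp/hostdir:/mount-9p &
    out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T /mount-9p | grep 9p"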

TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdspecific-port899145957/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (391.847979ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdspecific-port899145957/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 ssh "sudo umount -f /mount-9p": exit status 1 (308.175597ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-419430 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdspecific-port899145957/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.34s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T" /mount1: exit status 1 (769.660848ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-419430 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-419430 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-419430 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4283781909/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.34s)
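
VerifyCleanup tears down all three mount daemons at once with the --kill flag; the standalone form, as invoked at functional_test_mount_test.go:370:

    out/minikube-linux-arm64 mount -p functional-419430 --kill=true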

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-419430 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-419430
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-419430
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-419430
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (99.68s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-989906 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1024 19:46:37.740272 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-989906 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m39.68177134s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (99.68s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.73s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-989906 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.73s)

TestJSONOutput/start/Command (77.13s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-760390 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1024 19:55:01.824060 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-760390 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m17.1253059s)
--- PASS: TestJSONOutput/start/Command (77.13s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.82s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-760390 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.82s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-760390 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-760390 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-760390 --output=json --user=testUser: (6.020718451s)
--- PASS: TestJSONOutput/stop/Command (6.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-535363 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-535363 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.299432ms)

-- stdout --
	{"specversion":"1.0","id":"3a33fc32-a6ad-43d0-9eb6-dff0ff988d23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-535363] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4841d408-18a2-4fe5-842c-0c5e5eb4a269","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17485"}}
	{"specversion":"1.0","id":"0b57c8fa-456e-4cd4-a062-a422c832b96c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"51ee4ce2-c17e-4a3a-baf9-83b08488726e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig"}}
	{"specversion":"1.0","id":"4b9019e2-1071-4437-a983-b41ef084614e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube"}}
	{"specversion":"1.0","id":"0e97b9e6-68a1-4405-ade2-eebe6edf2ab3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f0bfd40c-41dd-4fa7-b974-47bdac661946","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"311874fe-6d24-4970-86c9-988f8d164e9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-535363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-535363
--- PASS: TestErrorJSONOutput (0.26s)
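
Each stdout line above is a CloudEvents-style JSON object, so error events can be filtered mechanically. A sketch with jq over the same fields shown in the output, assuming jq is available:

    out/minikube-linux-arm64 start -p json-output-error-535363 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'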

TestKicCustomNetwork/create_custom_network (48.87s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-985552 --network=
E1024 19:56:37.740069 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-985552 --network=: (46.597329721s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-985552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-985552
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-985552: (2.256617388s)
--- PASS: TestKicCustomNetwork/create_custom_network (48.87s)

TestKicCustomNetwork/use_default_bridge_network (34.13s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-891135 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-891135 --network=bridge: (32.047008005s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-891135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-891135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-891135: (2.053984555s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.13s)

TestKicExistingNetwork (37.13s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-358469 --network=existing-network
E1024 19:58:00.663230 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:00.668559 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:00.678794 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:00.699136 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:00.739434 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:00.819770 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:00.980204 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:01.300385 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:01.941186 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:03.222262 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-358469 --network=existing-network: (34.958799713s)
helpers_test.go:175: Cleaning up "existing-network-358469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-358469
E1024 19:58:05.782604 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-358469: (2.006937691s)
--- PASS: TestKicExistingNetwork (37.13s)

                                                
                                    
TestKicCustomSubnet (38.77s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-065297 --subnet=192.168.60.0/24
E1024 19:58:10.903303 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:21.143451 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:58:41.624168 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-065297 --subnet=192.168.60.0/24: (36.599042396s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-065297 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-065297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-065297
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-065297: (2.147189416s)
--- PASS: TestKicCustomSubnet (38.77s)
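
The subnet check above can be replayed by hand along these lines (placeholder profile name; the subnet value mirrors the test invocation):

    # Start a cluster whose Docker network is carved from a specific subnet.
    minikube start -p custom-subnet-demo --subnet=192.168.60.0/24
    # Confirm the subnet Docker actually assigned to the cluster network.
    docker network inspect custom-subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'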

                                                
                                    
TestKicStaticIP (34.95s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-614389 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-614389 --static-ip=192.168.200.200: (32.624981745s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-614389 ip
helpers_test.go:175: Cleaning up "static-ip-614389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-614389
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-614389: (2.130767355s)
--- PASS: TestKicStaticIP (34.95s)
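
A minimal manual equivalent, with a placeholder profile name:

    # Pin the node container to a fixed address (docker/podman drivers).
    minikube start -p static-ip-demo --static-ip=192.168.200.200
    # Should print 192.168.200.200.
    minikube -p static-ip-demo ip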

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (68.3s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-842003 --driver=docker  --container-runtime=crio
E1024 19:59:22.585862 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 19:59:34.139668 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-842003 --driver=docker  --container-runtime=crio: (32.418565865s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-844522 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-844522 --driver=docker  --container-runtime=crio: (30.476604757s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-842003
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-844522
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-844522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-844522
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-844522: (2.043547089s)
helpers_test.go:175: Cleaning up "first-842003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-842003
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-842003: (1.994659756s)
--- PASS: TestMinikubeProfile (68.30s)
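
The profile round-trip above boils down to something like this (profile names are placeholders):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    # Switch the active profile, then dump all profiles as JSON.
    minikube profile first
    minikube profile list -ojson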

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-028799 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-028799 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.566367188s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.57s)
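
A sketch of the same mount setup by hand; the flag values mirror the test invocation above and the profile name is a placeholder:

    # Start a Kubernetes-less machine with a 9p host mount on a custom port.
    minikube start -p mount-demo --memory=2048 --no-kubernetes \
      --mount --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
    # The host directory should be visible inside the guest.
    minikube -p mount-demo ssh -- ls /minikube-host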

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-028799 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.61s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-035191 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1024 20:00:44.506910 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-035191 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.611257761s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.61s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-035191 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-028799 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-028799 --alsologtostderr -v=5: (1.687864515s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-035191 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-035191
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-035191: (1.24675997s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-035191
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-035191: (7.011758286s)
--- PASS: TestMountStart/serial/RestartStopped (8.01s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-035191 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (125.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-773966 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1024 20:01:37.740824 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 20:03:00.663779 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-773966 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m4.786032583s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.36s)
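
A hand-run equivalent of the fresh two-node start (placeholder profile name):

    # Bring up a two-node cluster and wait for all components to be ready.
    minikube start -p multi-demo --nodes=2 --memory=2200 --wait=true \
      --driver=docker --container-runtime=crio
    # Both nodes should report running kubelets.
    minikube -p multi-demo status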

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.73s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-773966 -- rollout status deployment/busybox: (4.394237669s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-c622k -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-wldjb -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-c622k -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-wldjb -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-c622k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-773966 -- exec busybox-5bc68d56bd-wldjb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.73s)

                                                
                                    
TestMultiNode/serial/AddNode (20.77s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-773966 -v 3 --alsologtostderr
E1024 20:03:28.347071 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-773966 -v 3 --alsologtostderr: (20.015062098s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.77s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.35s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp testdata/cp-test.txt multinode-773966:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile529084509/001/cp-test_multinode-773966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966:/home/docker/cp-test.txt multinode-773966-m02:/home/docker/cp-test_multinode-773966_multinode-773966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m02 "sudo cat /home/docker/cp-test_multinode-773966_multinode-773966-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966:/home/docker/cp-test.txt multinode-773966-m03:/home/docker/cp-test_multinode-773966_multinode-773966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m03 "sudo cat /home/docker/cp-test_multinode-773966_multinode-773966-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp testdata/cp-test.txt multinode-773966-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile529084509/001/cp-test_multinode-773966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966-m02:/home/docker/cp-test.txt multinode-773966:/home/docker/cp-test_multinode-773966-m02_multinode-773966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966 "sudo cat /home/docker/cp-test_multinode-773966-m02_multinode-773966.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966-m02:/home/docker/cp-test.txt multinode-773966-m03:/home/docker/cp-test_multinode-773966-m02_multinode-773966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m03 "sudo cat /home/docker/cp-test_multinode-773966-m02_multinode-773966-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp testdata/cp-test.txt multinode-773966-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile529084509/001/cp-test_multinode-773966-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966-m03:/home/docker/cp-test.txt multinode-773966:/home/docker/cp-test_multinode-773966-m03_multinode-773966.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966 "sudo cat /home/docker/cp-test_multinode-773966-m03_multinode-773966.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 cp multinode-773966-m03:/home/docker/cp-test.txt multinode-773966-m02:/home/docker/cp-test_multinode-773966-m03_multinode-773966-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 ssh -n multinode-773966-m02 "sudo cat /home/docker/cp-test_multinode-773966-m03_multinode-773966-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.35s)
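
The copy matrix above reduces to a few cp/ssh round-trips; a sketch with placeholder names:

    # Host -> node, then node -> node.
    minikube -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt \
      multi-demo-m02:/home/docker/cp-test.txt
    # Read the file back on the target node to verify the copy.
    minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"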

                                                
                                    
TestMultiNode/serial/StopNode (2.41s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-773966 node stop m03: (1.256601538s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-773966 status: exit status 7 (578.952544ms)

                                                
                                                
-- stdout --
	multinode-773966
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-773966-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-773966-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-773966 status --alsologtostderr: exit status 7 (571.714205ms)

                                                
                                                
-- stdout --
	multinode-773966
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-773966-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-773966-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 20:03:52.080359 1190662 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:03:52.080495 1190662 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:03:52.081290 1190662 out.go:309] Setting ErrFile to fd 2...
	I1024 20:03:52.081318 1190662 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:03:52.081808 1190662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 20:03:52.082006 1190662 out.go:303] Setting JSON to false
	I1024 20:03:52.082137 1190662 notify.go:220] Checking for updates...
	I1024 20:03:52.082096 1190662 mustload.go:65] Loading cluster: multinode-773966
	I1024 20:03:52.083327 1190662 config.go:182] Loaded profile config "multinode-773966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:03:52.083340 1190662 status.go:255] checking status of multinode-773966 ...
	I1024 20:03:52.083843 1190662 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Status}}
	I1024 20:03:52.103122 1190662 status.go:330] multinode-773966 host status = "Running" (err=<nil>)
	I1024 20:03:52.103147 1190662 host.go:66] Checking if "multinode-773966" exists ...
	I1024 20:03:52.103438 1190662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-773966
	I1024 20:03:52.122008 1190662 host.go:66] Checking if "multinode-773966" exists ...
	I1024 20:03:52.122307 1190662 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 20:03:52.122394 1190662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966
	I1024 20:03:52.146359 1190662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34285 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966/id_rsa Username:docker}
	I1024 20:03:52.244311 1190662 ssh_runner.go:195] Run: systemctl --version
	I1024 20:03:52.249793 1190662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:03:52.263180 1190662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:03:52.342128 1190662 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-24 20:03:52.332208268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:03:52.342718 1190662 kubeconfig.go:92] found "multinode-773966" server: "https://192.168.58.2:8443"
	I1024 20:03:52.342754 1190662 api_server.go:166] Checking apiserver status ...
	I1024 20:03:52.342794 1190662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 20:03:52.354831 1190662 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1261/cgroup
	I1024 20:03:52.366028 1190662 api_server.go:182] apiserver freezer: "13:freezer:/docker/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30/crio/crio-a4657e40ce5e0e784287a6a642839ca9276fe61b60bcd73794e0e8f4ff30cc96"
	I1024 20:03:52.366097 1190662 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/94e7e8f6e06d3113db4de57f9253671649596f6c8bf1d58e126aea4e351cbe30/crio/crio-a4657e40ce5e0e784287a6a642839ca9276fe61b60bcd73794e0e8f4ff30cc96/freezer.state
	I1024 20:03:52.376065 1190662 api_server.go:204] freezer state: "THAWED"
	I1024 20:03:52.376090 1190662 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1024 20:03:52.384995 1190662 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1024 20:03:52.385030 1190662 status.go:421] multinode-773966 apiserver status = Running (err=<nil>)
	I1024 20:03:52.385043 1190662 status.go:257] multinode-773966 status: &{Name:multinode-773966 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1024 20:03:52.385060 1190662 status.go:255] checking status of multinode-773966-m02 ...
	I1024 20:03:52.385379 1190662 cli_runner.go:164] Run: docker container inspect multinode-773966-m02 --format={{.State.Status}}
	I1024 20:03:52.403659 1190662 status.go:330] multinode-773966-m02 host status = "Running" (err=<nil>)
	I1024 20:03:52.403685 1190662 host.go:66] Checking if "multinode-773966-m02" exists ...
	I1024 20:03:52.403976 1190662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-773966-m02
	I1024 20:03:52.422080 1190662 host.go:66] Checking if "multinode-773966-m02" exists ...
	I1024 20:03:52.422382 1190662 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 20:03:52.422428 1190662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-773966-m02
	I1024 20:03:52.445360 1190662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34290 SSHKeyPath:/home/jenkins/minikube-integration/17485-1112248/.minikube/machines/multinode-773966-m02/id_rsa Username:docker}
	I1024 20:03:52.544605 1190662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:03:52.558091 1190662 status.go:257] multinode-773966-m02 status: &{Name:multinode-773966-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1024 20:03:52.558124 1190662 status.go:255] checking status of multinode-773966-m03 ...
	I1024 20:03:52.558422 1190662 cli_runner.go:164] Run: docker container inspect multinode-773966-m03 --format={{.State.Status}}
	I1024 20:03:52.576929 1190662 status.go:330] multinode-773966-m03 host status = "Stopped" (err=<nil>)
	I1024 20:03:52.576955 1190662 status.go:343] host is not running, skipping remaining checks
	I1024 20:03:52.576963 1190662 status.go:257] multinode-773966-m03 status: &{Name:multinode-773966-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
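
A manual equivalent of the stop-one-node check (placeholder names; the non-zero status exit is expected):

    # Stop a single worker node.
    minikube -p multi-demo node stop m03
    # status now exits 7 because at least one node is down.
    minikube -p multi-demo status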

                                                
                                    
TestMultiNode/serial/StartAfterStop (12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-773966 node start m03 --alsologtostderr: (11.152061833s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.00s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (119.65s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-773966
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-773966
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-773966: (25.072067394s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-773966 --wait=true -v=8 --alsologtostderr
E1024 20:04:34.140387 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 20:05:57.184612 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-773966 --wait=true -v=8 --alsologtostderr: (1m34.403476836s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-773966
--- PASS: TestMultiNode/serial/RestartKeepsNodes (119.65s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.2s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-773966 node delete m03: (4.415192247s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-773966 stop: (23.931198305s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-773966 status: exit status 7 (110.614581ms)

                                                
                                                
-- stdout --
	multinode-773966
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-773966-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-773966 status --alsologtostderr: exit status 7 (109.11039ms)

                                                
                                                
-- stdout --
	multinode-773966
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-773966-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 20:06:33.541428 1198629 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:06:33.541632 1198629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:06:33.541643 1198629 out.go:309] Setting ErrFile to fd 2...
	I1024 20:06:33.541650 1198629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:06:33.541937 1198629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 20:06:33.542124 1198629 out.go:303] Setting JSON to false
	I1024 20:06:33.542215 1198629 mustload.go:65] Loading cluster: multinode-773966
	I1024 20:06:33.542304 1198629 notify.go:220] Checking for updates...
	I1024 20:06:33.542651 1198629 config.go:182] Loaded profile config "multinode-773966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:06:33.542669 1198629 status.go:255] checking status of multinode-773966 ...
	I1024 20:06:33.543167 1198629 cli_runner.go:164] Run: docker container inspect multinode-773966 --format={{.State.Status}}
	I1024 20:06:33.561728 1198629 status.go:330] multinode-773966 host status = "Stopped" (err=<nil>)
	I1024 20:06:33.561791 1198629 status.go:343] host is not running, skipping remaining checks
	I1024 20:06:33.561799 1198629 status.go:257] multinode-773966 status: &{Name:multinode-773966 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1024 20:06:33.561823 1198629 status.go:255] checking status of multinode-773966-m02 ...
	I1024 20:06:33.562125 1198629 cli_runner.go:164] Run: docker container inspect multinode-773966-m02 --format={{.State.Status}}
	I1024 20:06:33.581357 1198629 status.go:330] multinode-773966-m02 host status = "Stopped" (err=<nil>)
	I1024 20:06:33.581378 1198629 status.go:343] host is not running, skipping remaining checks
	I1024 20:06:33.581385 1198629 status.go:257] multinode-773966-m02 status: &{Name:multinode-773966-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.15s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (83.96s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-773966 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1024 20:06:37.740385 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-773966 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m23.159695846s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-773966 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.96s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-773966
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-773966-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-773966-m02 --driver=docker  --container-runtime=crio: exit status 14 (99.673207ms)

                                                
                                                
-- stdout --
	* [multinode-773966-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-773966-m02' is duplicated with machine name 'multinode-773966-m02' in profile 'multinode-773966'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-773966-m03 --driver=docker  --container-runtime=crio
E1024 20:08:00.663767 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-773966-m03 --driver=docker  --container-runtime=crio: (34.688808168s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-773966
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-773966: exit status 80 (368.050103ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-773966
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-773966-m03 already exists in multinode-773966-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-773966-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-773966-m03: (2.030720269s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.26s)
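
What the test exercises, as a hand-run sketch (placeholder names): a new profile may not reuse the machine name of a node in an existing multi-node profile.

    # With a multi-node profile "multi-demo" running, this collides with its
    # second machine "multi-demo-m02" and exits 14 (MK_USAGE).
    minikube start -p multi-demo-m02 --driver=docker --container-runtime=crio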

                                                
                                    
TestPreload (148.12s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-088785 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1024 20:09:34.140519 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 20:09:40.786862 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-088785 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m21.425731849s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-088785 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-088785 image pull gcr.io/k8s-minikube/busybox: (1.905889188s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-088785
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-088785: (5.903588249s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-088785 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-088785 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (56.265350489s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-088785 image list
helpers_test.go:175: Cleaning up "test-preload-088785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-088785
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-088785: (2.355826437s)
--- PASS: TestPreload (148.12s)
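
The preload round-trip by hand (placeholder profile name):

    # Create a cluster without preloaded images, then pull an extra image.
    minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    # Stop and restart; the pulled image should survive the restart.
    minikube stop -p preload-demo
    minikube start -p preload-demo --wait=true --driver=docker --container-runtime=crio
    minikube -p preload-demo image list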

                                                
                                    
TestScheduledStopUnix (112s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-689367 --memory=2048 --driver=docker  --container-runtime=crio
E1024 20:11:37.740002 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-689367 --memory=2048 --driver=docker  --container-runtime=crio: (34.86380483s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-689367 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-689367 -n scheduled-stop-689367
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-689367 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-689367 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-689367 -n scheduled-stop-689367
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-689367
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-689367 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-689367
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-689367: exit status 7 (89.623925ms)

                                                
                                                
-- stdout --
	scheduled-stop-689367
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-689367 -n scheduled-stop-689367
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-689367 -n scheduled-stop-689367: exit status 7 (88.109257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-689367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-689367
E1024 20:13:00.663547 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-689367: (5.327550442s)
--- PASS: TestScheduledStopUnix (112.00s)
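
A manual sketch of the scheduled-stop flow (placeholder profile; assumes a cluster is already running, as in the start command above):

    # Schedule a stop, cancel it, then schedule a short one and let it fire.
    minikube stop -p sched-demo --schedule 5m
    minikube stop -p sched-demo --cancel-scheduled
    minikube stop -p sched-demo --schedule 15s
    # After ~15s the host reports Stopped; status exits 7, which is expected.
    minikube status --format='{{.Host}}' -p sched-demo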

                                                
                                    
TestInsufficientStorage (11.58s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-003418 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-003418 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.89799445s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"50ad2439-2771-40bf-94fe-ae4cc38dda2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-003418] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ddab38ba-83f6-41d7-8647-7e6e44a97dea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17485"}}
	{"specversion":"1.0","id":"b7b67cf0-7509-4924-ad2d-5de4f6b12113","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"229bdfd5-6211-425f-8548-a18e7e2e8104","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig"}}
	{"specversion":"1.0","id":"a19bc209-1161-4be8-a7a5-98e4c88f4815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube"}}
	{"specversion":"1.0","id":"ffabc59f-4863-432e-b9a8-b0cdb3e6e406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9e0877d1-81d9-462c-af23-87374d188dbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2e2be569-2a8b-44d9-89a9-98d3b9d91eb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e4d07bc3-0293-4022-9285-3829e3a6f899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a5f41f00-4a0c-4db1-a84c-e860f0b4d9a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"76cd7590-190b-4faf-bd9d-50a7b47ff21a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"be7313ef-4686-487a-a196-45a2ee8ec876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-003418 in cluster insufficient-storage-003418","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5cf62061-6ac6-40fe-b870-b934e5703b9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff8c2bb7-9996-4629-942c-810b9ef75cea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0bb1de03-dbf4-49b7-9f64-29a2b5ad5c32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-003418 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-003418 --output=json --layout=cluster: exit status 7 (327.031706ms)

-- stdout --
	{"Name":"insufficient-storage-003418","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-003418","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1024 20:13:10.600284 1215616 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-003418" does not appear in /home/jenkins/minikube-integration/17485-1112248/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-003418 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-003418 --output=json --layout=cluster: exit status 7 (336.676636ms)

-- stdout --
	{"Name":"insufficient-storage-003418","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-003418","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1024 20:13:10.936931 1215671 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-003418" does not appear in /home/jenkins/minikube-integration/17485-1112248/kubeconfig
	E1024 20:13:10.949313 1215671 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/insufficient-storage-003418/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-003418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-003418
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-003418: (2.020480546s)
--- PASS: TestInsufficientStorage (11.58s)
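
Note that the cluster-layout JSON above is the contract being tested: StatusCode 507 / StatusName "InsufficientStorage" at both the cluster and node level. A minimal sketch of the same assertion outside the Go harness, assuming jq is installed and reusing this run's profile name:

	# status exits 7 here, but still prints the JSON layout on stdout
	out/minikube-linux-arm64 status -p insufficient-storage-003418 --output=json --layout=cluster | jq -r '.StatusName'
	# expected output: InsufficientStorage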

TestKubernetesUpgrade (409.05s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1024 20:14:34.140658 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.833674754s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-216640
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-216640: (1.300210525s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-216640 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-216640 status --format={{.Host}}: exit status 7 (153.943766ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m41.889342026s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-216640 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (128.189232ms)

-- stdout --
	* [kubernetes-upgrade-216640] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-216640
	    minikube start -p kubernetes-upgrade-216640 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2166402 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-216640 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.926201902s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-216640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-216640
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-216640: (2.630026079s)
--- PASS: TestKubernetesUpgrade (409.05s)
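
The upgrade path exercised above can be replayed by hand with the same flags; only the downgrade step is expected to fail:

	# start on the old version, stop, then upgrade in place
	out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-216640
	out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.28.3 --driver=docker --container-runtime=crio
	# attempting to go back to v1.16.0 exits 106 (K8S_DOWNGRADE_UNSUPPORTED)
	out/minikube-linux-arm64 start -p kubernetes-upgrade-216640 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio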

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531374 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-531374 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (95.53333ms)

-- stdout --
	* [NoKubernetes-531374] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
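
The exit-14 usage error above is the expected result: --no-kubernetes and an explicit --kubernetes-version are mutually exclusive. If the version comes from global config rather than the command line, the remedy printed in the output applies:

	out/minikube-linux-arm64 config unset kubernetes-version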

TestNoKubernetes/serial/StartWithK8s (43.27s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531374 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-531374 --driver=docker  --container-runtime=crio: (42.664191095s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-531374 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.27s)

TestNoKubernetes/serial/StartWithStopK8s (8.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531374 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-531374 --no-kubernetes --driver=docker  --container-runtime=crio: (5.956900699s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-531374 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-531374 status -o json: exit status 2 (588.390552ms)

-- stdout --
	{"Name":"NoKubernetes-531374","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-531374
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-531374: (2.207388315s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.75s)

TestNoKubernetes/serial/Start (10.09s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531374 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-531374 --no-kubernetes --driver=docker  --container-runtime=crio: (10.092464698s)
--- PASS: TestNoKubernetes/serial/Start (10.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-531374 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-531374 "sudo systemctl is-active --quiet service kubelet": exit status 1 (389.965321ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
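
The exit status 3 surfaced in stderr is what the test is looking for: systemctl is-active returns 3 for an inactive unit, so a non-zero exit proves no kubelet is running. The same probe works interactively:

	out/minikube-linux-arm64 ssh -p NoKubernetes-531374 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"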

TestNoKubernetes/serial/ProfileList (0.94s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.94s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-531374
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-531374: (1.288916881s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (7.91s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531374 --driver=docker  --container-runtime=crio
E1024 20:14:23.711065 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-531374 --driver=docker  --container-runtime=crio: (7.912283133s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.91s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-531374 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-531374 "sudo systemctl is-active --quiet service kubelet": exit status 1 (388.752451ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

TestStoppedBinaryUpgrade/Setup (1.01s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.01s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-825028
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.69s)

TestPause/serial/Start (54.3s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-894951 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1024 20:19:34.139822 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-894951 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.295490076s)
--- PASS: TestPause/serial/Start (54.30s)

TestPause/serial/SecondStartNoReconfiguration (37.3s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-894951 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-894951 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.259973601s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.30s)

TestPause/serial/Pause (1.1s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-894951 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-894951 --alsologtostderr -v=5: (1.098987197s)
--- PASS: TestPause/serial/Pause (1.10s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-894951 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-894951 --output=json --layout=cluster: exit status 2 (392.357371ms)

-- stdout --
	{"Name":"pause-894951","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-894951","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
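
The exit status 2 and the 418 ("Paused") codes in the JSON are the expected readings for a paused profile: the apiserver reports Paused while the kubelet reports Stopped. The same check can be made by hand:

	out/minikube-linux-arm64 pause -p pause-894951
	out/minikube-linux-arm64 status -p pause-894951 --output=json --layout=cluster   # exits 2 while paused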

TestPause/serial/Unpause (0.76s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-894951 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

TestPause/serial/PauseAgain (1.04s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-894951 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-894951 --alsologtostderr -v=5: (1.036081396s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

TestPause/serial/DeletePaused (2.99s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-894951 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-894951 --alsologtostderr -v=5: (2.98747436s)
--- PASS: TestPause/serial/DeletePaused (2.99s)

TestPause/serial/VerifyDeletedResources (8.2s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (8.121772593s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-894951
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-894951: exit status 1 (23.05798ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-894951: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (8.20s)
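
The cleanup verification amounts to three docker queries; after deletion, the volume lookup is expected to fail with "no such volume":

	docker ps -a                         # no pause-894951 container left
	docker volume inspect pause-894951   # errors once the profile is deleted
	docker network ls                    # no pause-894951 network left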

TestNetworkPlugins/group/false (5.54s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-793094 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-793094 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (354.006641ms)

-- stdout --
	* [false-793094] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1024 20:21:22.837083 1254847 out.go:296] Setting OutFile to fd 1 ...
	I1024 20:21:22.837269 1254847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:21:22.837299 1254847 out.go:309] Setting ErrFile to fd 2...
	I1024 20:21:22.837320 1254847 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 20:21:22.837599 1254847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-1112248/.minikube/bin
	I1024 20:21:22.838037 1254847 out.go:303] Setting JSON to false
	I1024 20:21:22.839029 1254847 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36232,"bootTime":1698142651,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1048-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1024 20:21:22.839428 1254847 start.go:138] virtualization:  
	I1024 20:21:22.842589 1254847 out.go:177] * [false-793094] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1024 20:21:22.844312 1254847 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 20:21:22.846302 1254847 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 20:21:22.844535 1254847 notify.go:220] Checking for updates...
	I1024 20:21:22.848656 1254847 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-1112248/kubeconfig
	I1024 20:21:22.851547 1254847 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-1112248/.minikube
	I1024 20:21:22.853240 1254847 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1024 20:21:22.855040 1254847 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 20:21:22.857451 1254847 config.go:182] Loaded profile config "force-systemd-flag-850104": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1024 20:21:22.857550 1254847 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 20:21:22.908213 1254847 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1024 20:21:22.908313 1254847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1024 20:21:23.068326 1254847 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-24 20:21:23.055703032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1048-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1024 20:21:23.068451 1254847 docker.go:295] overlay module found
	I1024 20:21:23.070759 1254847 out.go:177] * Using the docker driver based on user configuration
	I1024 20:21:23.073025 1254847 start.go:298] selected driver: docker
	I1024 20:21:23.073042 1254847 start.go:902] validating driver "docker" against <nil>
	I1024 20:21:23.073120 1254847 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 20:21:23.077204 1254847 out.go:177] 
	W1024 20:21:23.079722 1254847 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1024 20:21:23.081911 1254847 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-793094 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-793094

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-793094

>>> host: /etc/nsswitch.conf:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /etc/hosts:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /etc/resolv.conf:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-793094

>>> host: crictl pods:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: crictl containers:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> k8s: describe netcat deployment:
error: context "false-793094" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-793094" does not exist

>>> k8s: netcat logs:
error: context "false-793094" does not exist

>>> k8s: describe coredns deployment:
error: context "false-793094" does not exist

>>> k8s: describe coredns pods:
error: context "false-793094" does not exist

>>> k8s: coredns logs:
error: context "false-793094" does not exist

>>> k8s: describe api server pod(s):
error: context "false-793094" does not exist

>>> k8s: api server logs:
error: context "false-793094" does not exist

>>> host: /etc/cni:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: ip a s:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: ip r s:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: iptables-save:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: iptables table nat:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> k8s: describe kube-proxy daemon set:
error: context "false-793094" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-793094" does not exist

>>> k8s: kube-proxy logs:
error: context "false-793094" does not exist

>>> host: kubelet daemon status:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: kubelet daemon config:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> k8s: kubelet logs:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17485-1112248/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 20:21:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-850104
contexts:
- context:
    cluster: force-systemd-flag-850104
    extensions:
    - extension:
        last-update: Tue, 24 Oct 2023 20:21:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: force-systemd-flag-850104
  name: force-systemd-flag-850104
current-context: force-systemd-flag-850104
kind: Config
preferences: {}
users:
- name: force-systemd-flag-850104
  user:
    client-certificate: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/force-systemd-flag-850104/client.crt
    client-key: /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/force-systemd-flag-850104/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-793094

>>> host: docker daemon status:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: docker daemon config:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /etc/docker/daemon.json:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: docker system info:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: cri-docker daemon status:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: cri-docker daemon config:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: cri-dockerd version:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: containerd daemon status:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: containerd daemon config:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /etc/containerd/config.toml:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: containerd config dump:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: crio daemon status:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: crio daemon config:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: /etc/crio:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

>>> host: crio config:
* Profile "false-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-793094"

----------------------- debugLogs end: false-793094 [took: 4.964130089s] --------------------------------
helpers_test.go:175: Cleaning up "false-793094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-793094
--- PASS: TestNetworkPlugins/group/false (5.54s)
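
The MK_USAGE rejection above is a pre-flight validation rather than a runtime failure: cri-o ships no built-in networking, so minikube refuses --cni=false before creating any resources. A minimal reproduction with the same flags:

	out/minikube-linux-arm64 start -p false-793094 --cni=false --driver=docker --container-runtime=crio
	# X Exiting due to MK_USAGE: The "crio" container runtime requires CNI (exit status 14)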

TestStartStop/group/old-k8s-version/serial/FirstStart (134.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-495318 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1024 20:23:00.663499 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 20:24:34.139911 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-495318 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m14.43400928s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.43s)

TestStartStop/group/old-k8s-version/serial/DeployApp (12.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-495318 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8915c9d0-e396-4b96-8607-67394c4e6676] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8915c9d0-e396-4b96-8607-67394c4e6676] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.030634833s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-495318 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.58s)
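
Outside the harness, the readiness wait above maps onto kubectl wait; a minimal sketch, assuming the busybox manifest from testdata:

	kubectl --context old-k8s-version-495318 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-495318 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-495318 exec busybox -- /bin/sh -c "ulimit -n"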

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-495318 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-495318 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/old-k8s-version/serial/Stop (12.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-495318 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-495318 --alsologtostderr -v=3: (12.200199911s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.20s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-495318 -n old-k8s-version-495318
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-495318 -n old-k8s-version-495318: exit status 7 (93.481307ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-495318 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (78.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-495318 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-495318 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m18.353896872s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-495318 -n old-k8s-version-495318
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (78.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.63s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-099082 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 20:26:20.788020 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 20:26:37.740595 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-099082 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m10.634396482s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rj8qb" [78d13f34-aefe-4a64-b6e1-677c35035110] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027699408s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-rj8qb" [78d13f34-aefe-4a64-b6e1-677c35035110] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011924437s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-495318 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-495318 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.65s)
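
Note: the image audit shells into the node and dumps the CRI-O image store as JSON; the test then flags anything outside its expected minikube image set (the kindnetd and busybox entries above). By hand:

    # List every image CRI-O has pulled inside the node, in JSON form.
    out/minikube-linux-arm64 ssh -p old-k8s-version-495318 "sudo crictl images -o json"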

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-495318 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-495318 --alsologtostderr -v=1: (1.343346492s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-495318 -n old-k8s-version-495318
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-495318 -n old-k8s-version-495318: exit status 2 (478.99296ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-495318 -n old-k8s-version-495318
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-495318 -n old-k8s-version-495318: exit status 2 (421.23579ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-495318 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-495318 -n old-k8s-version-495318
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-495318 -n old-k8s-version-495318
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.64s)
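
Note: the pause cycle pairs pause/unpause with Go-template status queries. While paused, {{.APIServer}} reports Paused, {{.Kubelet}} reports Stopped, and status exits 2, which the test tolerates as "may be ok". Condensed:

    out/minikube-linux-arm64 pause -p old-k8s-version-495318 --alsologtostderr -v=1
    # Both queries exit 2 while the cluster is paused.
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-495318 -n old-k8s-version-495318
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-495318 -n old-k8s-version-495318
    out/minikube-linux-arm64 unpause -p old-k8s-version-495318 --alsologtostderr -v=1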

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (82.34s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-934037 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-934037 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m22.344117888s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.65s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-099082 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dfa177c7-2946-4e80-95de-dd9a2cb8aa20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dfa177c7-2946-4e80-95de-dd9a2cb8aa20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.03590833s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-099082 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.65s)
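
Note: DeployApp creates the stock busybox pod from the test tree and then checks the container's file-descriptor limit via exec. A rough by-hand equivalent (the kubectl wait stands in for the harness's poll loop):

    kubectl --context no-preload-099082 create -f testdata/busybox.yaml
    kubectl --context no-preload-099082 wait --for=condition=Ready pod busybox --timeout=8m
    # The ulimit probe confirms the runtime applied the expected fd limit.
    kubectl --context no-preload-099082 exec busybox -- /bin/sh -c "ulimit -n"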

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-099082 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-099082 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.831952113s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-099082 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.52s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-099082 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-099082 --alsologtostderr -v=3: (12.518514987s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-099082 -n no-preload-099082
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-099082 -n no-preload-099082: exit status 7 (119.714705ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-099082 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (348.65s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-099082 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 20:28:00.662857 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-099082 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m47.989705619s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-099082 -n no-preload-099082
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (348.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.62s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-934037 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [930bb999-01b4-49af-a5d6-b2e5671f2f7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [930bb999-01b4-49af-a5d6-b2e5671f2f7a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.033903378s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-934037 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-934037 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-934037 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.073355121s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-934037 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-934037 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-934037 --alsologtostderr -v=3: (12.153666362s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-934037 -n embed-certs-934037
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-934037 -n embed-certs-934037: exit status 7 (89.228033ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-934037 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (369.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-934037 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 20:29:34.139693 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 20:29:55.782496 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:55.787889 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:55.798145 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:55.818388 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:55.858625 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:55.939131 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:56.099481 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:56.420147 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:57.060867 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:29:58.341105 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:30:00.901264 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:30:06.021476 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:30:16.261814 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:30:36.742116 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:31:03.712222 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 20:31:17.702896 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:31:37.740521 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 20:32:39.623330 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
E1024 20:33:00.663204 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-934037 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (6m9.112104801s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-934037 -n embed-certs-934037
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (369.55s)
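
Note: the E-level cert_rotation.go:168 lines interleaved above come from client-go's certificate reload watcher; it is still trying to re-read client.crt files for profiles whose credentials were removed earlier in the run (addons-228070, functional-419430, ingress-addon-legacy-989906, old-k8s-version-495318). They are noise for this test, not failures. To see which profiles still have credentials on the workspace (path taken from the log):

    # Each E-line names a client.crt under a profile dir; listing the
    # parent shows which profiles remain on the Jenkins workspace.
    ls /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/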

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqw5r" [fecdd2af-3de0-4e48-a68c-b4af6216efd7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqw5r" [fecdd2af-3de0-4e48-a68c-b4af6216efd7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.030322597s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-gqw5r" [fecdd2af-3de0-4e48-a68c-b4af6216efd7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01032731s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-099082 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-099082 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.61s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-099082 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-099082 -n no-preload-099082
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-099082 -n no-preload-099082: exit status 2 (385.066426ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-099082 -n no-preload-099082
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-099082 -n no-preload-099082: exit status 2 (385.12067ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-099082 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-099082 -n no-preload-099082
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-099082 -n no-preload-099082
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-603725 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1024 20:34:34.139941 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-603725 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m25.209281135s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.21s)
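
Note: default-k8s-diff-port exercises a non-default API server port via --apiserver-port=8444. A sketch of the start plus a quick confirmation that the kubeconfig context points at that port:

    out/minikube-linux-arm64 start -p default-k8s-diff-port-603725 --memory=2200 \
      --apiserver-port=8444 --driver=docker --container-runtime=crio --kubernetes-version=v1.28.3
    # cluster-info prints the control-plane URL, which should use port 8444.
    kubectl cluster-info --context default-k8s-diff-port-603725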

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.05s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-59dhp" [4815e4d4-47f0-4b00-a3a2-d2529d64ca01] Running
E1024 20:34:55.782306 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.042956251s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-59dhp" [4815e4d4-47f0-4b00-a3a2-d2529d64ca01] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010466451s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-934037 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-934037 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-934037 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-934037 -n embed-certs-934037
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-934037 -n embed-certs-934037: exit status 2 (397.29175ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-934037 -n embed-certs-934037
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-934037 -n embed-certs-934037: exit status 2 (362.767603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-934037 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-934037 -n embed-certs-934037
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-934037 -n embed-certs-934037
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-146035 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-146035 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (46.138213166s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.14s)
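
Note: newest-cni starts with an explicit CNI network plugin, pushes a custom pod CIDR through kubeadm, and narrows --wait to control-plane components only; since no CNI is actually installed, workload pods cannot schedule, which is why later subtests print the "cni mode requires additional setup" warnings. The flag set in isolation:

    out/minikube-linux-arm64 start -p newest-cni-146035 --memory=2200 \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.3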

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-603725 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d5abcf67-cf33-47c7-aa1b-1d774190a2c7] Pending
helpers_test.go:344: "busybox" [d5abcf67-cf33-47c7-aa1b-1d774190a2c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1024 20:35:23.463882 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d5abcf67-cf33-47c7-aa1b-1d774190a2c7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.029179498s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-603725 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-603725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-603725 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.319417892s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-603725 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-603725 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-603725 --alsologtostderr -v=3: (12.303340215s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725: exit status 7 (144.356093ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-603725 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-603725 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-603725 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m53.743171468s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (354.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-146035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-146035 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.336245163s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-146035 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-146035 --alsologtostderr -v=3: (1.307610747s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-146035 -n newest-cni-146035
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-146035 -n newest-cni-146035: exit status 7 (95.196603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-146035 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.12s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-146035 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-146035 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (31.717193358s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-146035 -n newest-cni-146035
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-146035 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.28s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-146035 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-146035 -n newest-cni-146035
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-146035 -n newest-cni-146035: exit status 2 (370.616257ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-146035 -n newest-cni-146035
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-146035 -n newest-cni-146035: exit status 2 (414.790634ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-146035 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-146035 -n newest-cni-146035
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-146035 -n newest-cni-146035
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (90s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1024 20:37:16.731522 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:16.736811 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:16.747178 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:16.767438 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:16.807702 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:16.888017 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:17.048651 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:17.369153 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:18.009793 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:19.290018 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:21.850838 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:26.971153 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:37.211372 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:37:57.692080 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:38:00.662965 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m30.0027935s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.00s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-793094 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-793094 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qjfmh" [ebdd0b57-618f-4b6f-b870-b629ff38efdb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qjfmh" [ebdd0b57-618f-4b6f-b870-b629ff38efdb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.0109443s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.35s)
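
Note: each NetCatPod subtest force-replaces the shared netcat deployment from the test tree and waits for the app=netcat pod to become ready. By hand (the kubectl wait approximates the harness's poll):

    # replace --force deletes and recreates the object, so repeated plugin
    # runs do not collide on the same deployment.
    kubectl --context auto-793094 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-793094 wait --for=condition=Ready pod -l app=netcat --timeout=15m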

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-793094 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
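
Note: DNS, Localhost, and HairPin are three exec probes against the same netcat deployment: cluster DNS resolution, the pod's own localhost port, and hairpin traffic back through the pod's service name. Consolidated:

    # Cluster DNS resolution from inside the pod network.
    kubectl --context auto-793094 exec deployment/netcat -- nslookup kubernetes.default
    # Port 8080 on localhost inside the pod.
    kubectl --context auto-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: the pod reaches itself back through the "netcat" service.
    kubectl --context auto-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"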

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1024 20:39:17.186331 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (51.860312909s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.86s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qc8w4" [2c9b6cac-0d34-4965-9878-335266553024] Running
E1024 20:39:34.139937 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.029762427s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
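
Note: ControllerPod just waits for the CNI's own pod (here the kindnet DaemonSet, label app=kindnet as in the log above) to be healthy in kube-system before the networking probes run. A rough kubectl equivalent of the harness's poll:

    kubectl --context kindnet-793094 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m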

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-793094 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-793094 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4kgsg" [ab86301b-80b2-4951-8de8-2e4212c7c9af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4kgsg" [ab86301b-80b2-4951-8de8-2e4212c7c9af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.010483857s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)
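NetCatPod (re)creates the probe workload with kubectl replace --force, a delete-then-create that cannot collide with leftovers from an earlier group, and then polls for pods labelled app=netcat. A sketch of the same wait using kubectl alone; the 15m timeout mirrors the test's own budget:

  kubectl --context kindnet-793094 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context kindnet-793094 wait --for=condition=Ready pod -l app=netcat --timeout=15m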

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-793094 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)
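The DNS probe resolves the short name kubernetes.default, which only succeeds if the pod's resolv.conf search path and the cluster DNS service are both wired up. When it fails, the fully-qualified name and the DNS service IP can be queried separately to narrow things down; 10.96.0.10 is the conventional cluster DNS ClusterIP assumed here (it is also the address the kubenet debugLogs at the end of this report dig against):

  kubectl --context kindnet-793094 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context kindnet-793094 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10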

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/calico/Start (71.01s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.010998382s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.01s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dfk7b" [083a6527-f26c-4c6a-aab6-fda09e5c5f67] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.042521422s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)
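ControllerPod gives the CNI's node agent up to 10m to report Running; for Calico that is the calico-node DaemonSet matched by the k8s-app=calico-node label above. A quick manual check:

  kubectl --context calico-793094 -n kube-system get pods -l k8s-app=calico-node -o wide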

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-793094 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (13.53s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-793094 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hzgxq" [ed0bcfbb-19bc-45b0-9fb0-ad607c79595e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hzgxq" [ed0bcfbb-19bc-45b0-9fb0-ad607c79595e] Running
E1024 20:41:37.739952 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.015516702s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.53s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qpv7h" [f0ba5585-1653-438b-8251-b27d37f3bf16] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qpv7h" [f0ba5585-1653-438b-8251-b27d37f3bf16] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.025306244s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (11.03s)

TestNetworkPlugins/group/calico/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-793094 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-qpv7h" [f0ba5585-1653-438b-8251-b27d37f3bf16] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011132363s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-603725 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-603725 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.52s)
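VerifyKubernetesImages lists the CRI image store as JSON and reports anything that is not a stock minikube image, here the kindnet and busybox images pulled by earlier tests. The same dump can be skimmed by hand; the jq filter is an optional convenience and assumes jq is installed on the host:

  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-603725 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'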

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-603725 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-603725 --alsologtostderr -v=1: (1.274322727s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725: exit status 2 (519.311799ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725: exit status 2 (499.896884ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-603725 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-603725 --alsologtostderr -v=1: (1.338185347s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-603725 -n default-k8s-diff-port-603725
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.54s)
E1024 20:45:41.308786 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
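The two "exit status 2" results above are expected rather than failures: minikube status exits non-zero whenever a component is not in the Running state, which is exactly what pause produces ({{.APIServer}} reports Paused, {{.Kubelet}} reports Stopped) -- hence the test's own "may be ok" note. A sketch of the same round-trip, assuming the default-k8s-diff-port-603725 profile is still present:

  out/minikube-linux-arm64 pause -p default-k8s-diff-port-603725
  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-603725   # prints Paused, exit status 2
  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-603725
  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-603725   # should print Running, exit 0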

TestNetworkPlugins/group/custom-flannel/Start (76.96s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m16.955326334s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.96s)
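Unlike the named plugins elsewhere in this report, --cni here is given a path to a CNI manifest (testdata/kube-flannel.yaml) rather than a keyword, so any plugin distributed as plain Kubernetes YAML can be exercised the same way:

  out/minikube-linux-arm64 start -p custom-flannel-793094 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio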

TestNetworkPlugins/group/enable-default-cni/Start (80.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1024 20:42:16.731068 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:42:44.414337 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/no-preload-099082/client.crt: no such file or directory
E1024 20:43:00.663595 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/ingress-addon-legacy-989906/client.crt: no such file or directory
E1024 20:43:00.788618 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/addons-228070/client.crt: no such file or directory
E1024 20:43:09.371916 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:09.377437 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:09.387694 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:09.407925 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:09.448210 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:09.528472 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:09.689336 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:10.009826 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:10.650382 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:11.931417 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:14.492215 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:43:19.612995 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.887100141s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.89s)
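This group passes --enable-default-cni=true instead of a --cni value; it is the legacy switch for minikube's built-in bridge-style default CNI (effectively --cni=bridge in current minikube, kept for backward compatibility):

  out/minikube-linux-arm64 start -p enable-default-cni-793094 --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=crio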

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-793094 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-793094 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cjxzl" [d3fb7961-39d4-46a1-9e57-9ecdc7ec2e0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cjxzl" [d3fb7961-39d4-46a1-9e57-9ecdc7ec2e0d] Running
E1024 20:43:29.854082 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.010683928s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-793094 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-793094 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-793094 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5pvp8" [286efe42-98c7-4ece-97f2-f0f86d7c53ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5pvp8" [286efe42-98c7-4ece-97f2-f0f86d7c53ed] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.011970214s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.43s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-793094 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (70.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m10.716489404s)
--- PASS: TestNetworkPlugins/group/flannel/Start (70.72s)

TestNetworkPlugins/group/bridge/Start (51.17s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1024 20:44:31.295612 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/auto-793094/client.crt: no such file or directory
E1024 20:44:33.907948 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:33.913516 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:33.923752 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:33.943899 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:33.984155 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:34.064387 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:34.139717 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/functional-419430/client.crt: no such file or directory
E1024 20:44:34.224864 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:34.545246 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:35.185670 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:36.466302 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:39.027338 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:44.147965 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:54.388581 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
E1024 20:44:55.782396 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/old-k8s-version-495318/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-793094 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (51.173951238s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-793094 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-793094 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-w95dm" [a294b250-9f91-4c51-940a-ebfd9202c57f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-w95dm" [a294b250-9f91-4c51-940a-ebfd9202c57f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010534345s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-krj6x" [1a23d239-5aa6-48d7-bd05-e496d1033487] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.049736321s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-793094 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-793094 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6lppg" [2718f2b7-ca19-4d32-bacb-da087e3a5c64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6lppg" [2718f2b7-ca19-4d32-bacb-da087e3a5c64] Running
E1024 20:45:20.826306 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
E1024 20:45:20.831522 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
E1024 20:45:20.841793 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
E1024 20:45:20.862023 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
E1024 20:45:20.902257 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
E1024 20:45:20.983423 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
E1024 20:45:21.143676 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
E1024 20:45:21.464268 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
E1024 20:45:22.105137 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.014371685s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

TestNetworkPlugins/group/bridge/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-793094 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.29s)

TestNetworkPlugins/group/bridge/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.31s)

TestNetworkPlugins/group/bridge/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1024 20:45:14.869664 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/kindnet-793094/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.31s)

TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-793094 exec deployment/netcat -- nslookup kubernetes.default
E1024 20:45:23.385818 1117634 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-1112248/.minikube/profiles/default-k8s-diff-port-603725/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

TestNetworkPlugins/group/flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.25s)

TestNetworkPlugins/group/flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-793094 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

Test skip (29/307)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0.66s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-959559 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-959559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-959559
--- SKIP: TestDownloadOnlyKic (0.66s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-032578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-032578
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

TestNetworkPlugins/group/kubenet (5.81s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-793094 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-793094

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-793094

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-793094

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-793094

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-793094

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-793094

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-793094

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-793094

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-793094

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-793094

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /etc/hosts:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /etc/resolv.conf:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-793094

>>> host: crictl pods:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: crictl containers:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> k8s: describe netcat deployment:
error: context "kubenet-793094" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-793094" does not exist

>>> k8s: netcat logs:
error: context "kubenet-793094" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-793094" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-793094" does not exist

>>> k8s: coredns logs:
error: context "kubenet-793094" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-793094" does not exist

>>> k8s: api server logs:
error: context "kubenet-793094" does not exist

>>> host: /etc/cni:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: ip a s:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: ip r s:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: iptables-save:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: iptables table nat:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-793094" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-793094" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-793094" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: kubelet daemon config:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> k8s: kubelet logs:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-793094

>>> host: docker daemon status:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: docker daemon config:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: docker system info:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: cri-docker daemon status:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: cri-docker daemon config:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: cri-dockerd version:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: containerd daemon status:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: containerd daemon config:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: containerd config dump:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: crio daemon status:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: crio daemon config:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: /etc/crio:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

>>> host: crio config:
* Profile "kubenet-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-793094"

----------------------- debugLogs end: kubenet-793094 [took: 5.463029857s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-793094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-793094
--- SKIP: TestNetworkPlugins/group/kubenet (5.81s)
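
Note: every probe in the debugLogs dump above fails with either "context was not found for specified context: kubenet-793094" or the profile-not-found hint, because the kubenet-793094 profile was never created: the test was skipped before "minikube start" ever ran, so there is no kubeconfig context and no host to query. A minimal sketch of the failure mode, using hypothetical invocations rather than commands recorded in this run:

    # Any kubectl call pinned to the missing context reproduces the error seen above:
    kubectl --context kubenet-793094 get pods
    # Error in configuration: context was not found for specified context: kubenet-793094

    # Listing profiles, as the minikube hint suggests, would show kubenet-793094 absent:
    minikube profile list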

TestNetworkPlugins/group/cilium (6.66s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-793094 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-793094

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-793094

>>> host: /etc/nsswitch.conf:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /etc/hosts:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /etc/resolv.conf:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-793094

>>> host: crictl pods:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: crictl containers:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> k8s: describe netcat deployment:
error: context "cilium-793094" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-793094" does not exist

>>> k8s: netcat logs:
error: context "cilium-793094" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-793094" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-793094" does not exist

>>> k8s: coredns logs:
error: context "cilium-793094" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-793094" does not exist

>>> k8s: api server logs:
error: context "cilium-793094" does not exist

>>> host: /etc/cni:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: ip a s:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: ip r s:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: iptables-save:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: iptables table nat:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-793094

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-793094

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-793094" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-793094" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-793094

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-793094

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-793094" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-793094" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-793094" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-793094" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-793094" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: kubelet daemon config:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> k8s: kubelet logs:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-793094

>>> host: docker daemon status:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: docker daemon config:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: docker system info:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: cri-docker daemon status:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: cri-docker daemon config:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: cri-dockerd version:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: containerd daemon status:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: containerd daemon config:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: containerd config dump:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: crio daemon status:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: crio daemon config:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: /etc/crio:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

>>> host: crio config:
* Profile "cilium-793094" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-793094"

----------------------- debugLogs end: cilium-793094 [took: 6.42562128s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-793094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-793094
--- SKIP: TestNetworkPlugins/group/cilium (6.66s)
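
Note: the same reasoning applies here; the empty kubectl config above (clusters: null, contexts: null) confirms no cluster was ever provisioned for cilium-793094. Cleanup still runs unconditionally, as helpers_test.go records above. A sketch, assuming minikube's usual behavior when deleting a profile that was never started:

    out/minikube-linux-arm64 delete -p cilium-793094
    # Assumed to remove any on-disk state for the profile and exit cleanly
    # even though the cluster never existed (not verified in this run).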
