Test Report: Docker_Linux_crio_arm64 17565

8a42d885ed6317a7849bfdd99b0257f3ab4fbbcf:2023-11-09:31814

Test fail (10/307)

TestAddons/parallel/Ingress (484.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-386274 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-386274 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-386274 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0e834f15-978e-44df-b1cd-629da375aa81] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
addons_test.go:249: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:249: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-386274 -n addons-386274
addons_test.go:249: TestAddons/parallel/Ingress: showing logs for failed pods as of 2023-11-09 21:40:34.250486344 +0000 UTC m=+758.992457196
addons_test.go:249: (dbg) Run:  kubectl --context addons-386274 describe po nginx -n default
addons_test.go:249: (dbg) kubectl --context addons-386274 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-386274/192.168.49.2
Start Time:       Thu, 09 Nov 2023 21:32:33 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lsfdp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-lsfdp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-386274
  Warning  Failed     5m52s (x2 over 7m30s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    5m2s (x4 over 8m)       kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     4m2s (x4 over 7m30s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m2s (x2 over 6m45s)    kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     3m46s (x6 over 7m30s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m54s (x10 over 7m30s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
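
The kubelet events above pin down the root cause: anonymous pulls of docker.io/nginx:alpine are rejected with toomanyrequests, i.e. the Docker Hub anonymous pull rate limit for the runner's IP is exhausted. The remaining quota can be checked from the affected host against Docker's documented rate-limit probe repository (a sketch; assumes curl and jq are installed):

  # Request an anonymous pull token for Docker's rate-limit probe repo.
  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
  # HEAD the manifest; the ratelimit-limit / ratelimit-remaining response
  # headers report the quota for this source IP.
  curl -s --head -H "Authorization: Bearer $TOKEN" \
    "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

Shared CI runners behind a common NAT address routinely exhaust the anonymous limit, which matches the repeated pull attempts and back-off seen in the events.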
addons_test.go:249: (dbg) Run:  kubectl --context addons-386274 logs nginx -n default
addons_test.go:249: (dbg) Non-zero exit: kubectl --context addons-386274 logs nginx -n default: exit status 1 (113.628003ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:249: kubectl --context addons-386274 logs nginx -n default: exit status 1
addons_test.go:250: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
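
Since the failure is a registry rate limit rather than a product bug, one common mitigation (not applied in this run) is to authenticate pulls so Docker Hub's higher per-account limit applies. A minimal sketch, assuming a Docker Hub account and the default service account used by the nginx pod; the secret name regcred and the credential placeholders are illustrative:

  # Store Docker Hub credentials as an image pull secret in the default namespace.
  kubectl --context addons-386274 create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<password>
  # Attach the secret to the default service account so new pods use it implicitly.
  kubectl --context addons-386274 patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Alternatively, the test image could be side-loaded into the node with minikube image load, avoiding docker.io at pull time entirely.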
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-386274
helpers_test.go:235: (dbg) docker inspect addons-386274:

-- stdout --
	[
	    {
	        "Id": "656ed8ac374f86cd373095ed10791599ca6a7cb88a73af0eddf730289c552fdf",
	        "Created": "2023-11-09T21:28:43.15741634Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 714540,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T21:28:43.475291963Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/656ed8ac374f86cd373095ed10791599ca6a7cb88a73af0eddf730289c552fdf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/656ed8ac374f86cd373095ed10791599ca6a7cb88a73af0eddf730289c552fdf/hostname",
	        "HostsPath": "/var/lib/docker/containers/656ed8ac374f86cd373095ed10791599ca6a7cb88a73af0eddf730289c552fdf/hosts",
	        "LogPath": "/var/lib/docker/containers/656ed8ac374f86cd373095ed10791599ca6a7cb88a73af0eddf730289c552fdf/656ed8ac374f86cd373095ed10791599ca6a7cb88a73af0eddf730289c552fdf-json.log",
	        "Name": "/addons-386274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-386274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-386274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9cb596637e3b1ed6399e024f75405c015b1508b4107ad11a90f8359f6d32a5d7-init/diff:/var/lib/docker/overlay2/7d8c4fc646533218e970cbbc2fae53265551a122428b3ce7f5ec8807d1cc9c68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9cb596637e3b1ed6399e024f75405c015b1508b4107ad11a90f8359f6d32a5d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9cb596637e3b1ed6399e024f75405c015b1508b4107ad11a90f8359f6d32a5d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9cb596637e3b1ed6399e024f75405c015b1508b4107ad11a90f8359f6d32a5d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-386274",
	                "Source": "/var/lib/docker/volumes/addons-386274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-386274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-386274",
	                "name.minikube.sigs.k8s.io": "addons-386274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "32603d4a4c3d8489f759eb12f3f7e9c2dddd322ada4cef0fdbdf5723d563289e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33675"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33674"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33671"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33673"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33672"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/32603d4a4c3d",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-386274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "656ed8ac374f",
	                        "addons-386274"
	                    ],
	                    "NetworkID": "f453da5e76a1efaad8ba4a0a94a52c20f8892a31b93c5ee7f20e438809af9bbc",
	                    "EndpointID": "7a9dbaf75c28df0487f92df9aba9295136789b7f5ddb8544177a9340ebc14eb1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
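
The inspect dump confirms the node container itself is healthy (State.Status "running", static IP 192.168.49.2 on the addons-386274 network), so the failure is isolated to image pulls inside the cluster. For triage, the same fields can be extracted with a Go template instead of dumping the full JSON, as the harness already does with --format={{.State.Status}} elsewhere in this log (a sketch against the same container):

  # Print only the container state and its IP on the cluster network.
  docker inspect addons-386274 \
    --format '{{.State.Status}} {{(index .NetworkSettings.Networks "addons-386274").IPAddress}}'

Expected output for the state captured above: running 192.168.49.2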
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-386274 -n addons-386274
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-386274 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-386274 logs -n 25: (1.708424236s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-530486   | jenkins | v1.32.0 | 09 Nov 23 21:27 UTC |                     |
	|         | -p download-only-530486                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-530486   | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC |                     |
	|         | -p download-only-530486                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC | 09 Nov 23 21:28 UTC |
	| delete  | -p download-only-530486                                                                     | download-only-530486   | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC | 09 Nov 23 21:28 UTC |
	| delete  | -p download-only-530486                                                                     | download-only-530486   | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC | 09 Nov 23 21:28 UTC |
	| start   | --download-only -p                                                                          | download-docker-254770 | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC |                     |
	|         | download-docker-254770                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-254770                                                                   | download-docker-254770 | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC | 09 Nov 23 21:28 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-333375   | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC |                     |
	|         | binary-mirror-333375                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33375                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-333375                                                                     | binary-mirror-333375   | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC | 09 Nov 23 21:28 UTC |
	| addons  | enable dashboard -p                                                                         | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC |                     |
	|         | addons-386274                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC |                     |
	|         | addons-386274                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-386274 --wait=true                                                                | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC | 09 Nov 23 21:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:31 UTC | 09 Nov 23 21:31 UTC |
	|         | -p addons-386274                                                                            |                        |         |         |                     |                     |
	| ip      | addons-386274 ip                                                                            | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:31 UTC | 09 Nov 23 21:31 UTC |
	| addons  | addons-386274 addons disable                                                                | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:31 UTC | 09 Nov 23 21:31 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-386274 ssh cat                                                                       | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:31 UTC | 09 Nov 23 21:31 UTC |
	|         | /opt/local-path-provisioner/pvc-2b37186f-28e0-4c99-bc25-8fa1ced967d3_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-386274 addons disable                                                                | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:31 UTC | 09 Nov 23 21:32 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:31 UTC | 09 Nov 23 21:31 UTC |
	|         | addons-386274                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:31 UTC | 09 Nov 23 21:31 UTC |
	|         | -p addons-386274                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:32 UTC | 09 Nov 23 21:32 UTC |
	|         | addons-386274                                                                               |                        |         |         |                     |                     |
	| addons  | addons-386274 addons                                                                        | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:32 UTC | 09 Nov 23 21:32 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-386274 addons                                                                        | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:32 UTC | 09 Nov 23 21:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-386274 addons                                                                        | addons-386274          | jenkins | v1.32.0 | 09 Nov 23 21:32 UTC | 09 Nov 23 21:32 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/09 21:28:19
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 21:28:19.929463  714076 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:28:19.929598  714076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:28:19.929608  714076 out.go:309] Setting ErrFile to fd 2...
	I1109 21:28:19.929614  714076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:28:19.929915  714076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 21:28:19.930365  714076 out.go:303] Setting JSON to false
	I1109 21:28:19.931447  714076 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15050,"bootTime":1699550250,"procs":426,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 21:28:19.931522  714076 start.go:138] virtualization:  
	I1109 21:28:19.933897  714076 out.go:177] * [addons-386274] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 21:28:19.936528  714076 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 21:28:19.938457  714076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 21:28:19.936615  714076 notify.go:220] Checking for updates...
	I1109 21:28:19.942200  714076 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:28:19.943785  714076 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 21:28:19.945537  714076 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 21:28:19.947108  714076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 21:28:19.949155  714076 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 21:28:19.972866  714076 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 21:28:19.972973  714076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:28:20.060779  714076 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-09 21:28:20.050833859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:28:20.060905  714076 docker.go:295] overlay module found
	I1109 21:28:20.063074  714076 out.go:177] * Using the docker driver based on user configuration
	I1109 21:28:20.065040  714076 start.go:298] selected driver: docker
	I1109 21:28:20.065061  714076 start.go:902] validating driver "docker" against <nil>
	I1109 21:28:20.065074  714076 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 21:28:20.065687  714076 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:28:20.129945  714076 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-09 21:28:20.120619534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:28:20.130114  714076 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1109 21:28:20.130370  714076 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 21:28:20.132212  714076 out.go:177] * Using Docker driver with root privileges
	I1109 21:28:20.134371  714076 cni.go:84] Creating CNI manager for ""
	I1109 21:28:20.134392  714076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:28:20.134405  714076 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 21:28:20.134421  714076 start_flags.go:323] config:
	{Name:addons-386274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-386274 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:28:20.136554  714076 out.go:177] * Starting control plane node addons-386274 in cluster addons-386274
	I1109 21:28:20.138476  714076 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 21:28:20.140239  714076 out.go:177] * Pulling base image ...
	I1109 21:28:20.142137  714076 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 21:28:20.142196  714076 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1109 21:28:20.142207  714076 cache.go:56] Caching tarball of preloaded images
	I1109 21:28:20.142236  714076 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1109 21:28:20.142288  714076 preload.go:174] Found /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 21:28:20.142298  714076 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1109 21:28:20.142757  714076 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/config.json ...
	I1109 21:28:20.142791  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/config.json: {Name:mk688aed15e00e8569d7980f11a7d190ae2a6840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:20.159498  714076 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1109 21:28:20.159621  714076 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory
	I1109 21:28:20.159646  714076 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory, skipping pull
	I1109 21:28:20.159663  714076 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in cache, skipping pull
	I1109 21:28:20.159680  714076 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 as a tarball
	I1109 21:28:20.159686  714076 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 from local cache
	I1109 21:28:35.821508  714076 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 from cached tarball
	I1109 21:28:35.821544  714076 cache.go:194] Successfully downloaded all kic artifacts
	I1109 21:28:35.821610  714076 start.go:365] acquiring machines lock for addons-386274: {Name:mke95707f78b4d16b03e47e8e38c96341b7c9c5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 21:28:35.821719  714076 start.go:369] acquired machines lock for "addons-386274" in 87.983µs
	I1109 21:28:35.821751  714076 start.go:93] Provisioning new machine with config: &{Name:addons-386274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-386274 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 21:28:35.821843  714076 start.go:125] createHost starting for "" (driver="docker")
	I1109 21:28:35.824129  714076 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1109 21:28:35.824374  714076 start.go:159] libmachine.API.Create for "addons-386274" (driver="docker")
	I1109 21:28:35.824436  714076 client.go:168] LocalClient.Create starting
	I1109 21:28:35.824544  714076 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem
	I1109 21:28:36.104893  714076 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem
	I1109 21:28:36.466432  714076 cli_runner.go:164] Run: docker network inspect addons-386274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 21:28:36.483023  714076 cli_runner.go:211] docker network inspect addons-386274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 21:28:36.483104  714076 network_create.go:281] running [docker network inspect addons-386274] to gather additional debugging logs...
	I1109 21:28:36.483126  714076 cli_runner.go:164] Run: docker network inspect addons-386274
	W1109 21:28:36.499339  714076 cli_runner.go:211] docker network inspect addons-386274 returned with exit code 1
	I1109 21:28:36.499376  714076 network_create.go:284] error running [docker network inspect addons-386274]: docker network inspect addons-386274: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-386274 not found
	I1109 21:28:36.499391  714076 network_create.go:286] output of [docker network inspect addons-386274]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-386274 not found
	
	** /stderr **
	I1109 21:28:36.499495  714076 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 21:28:36.517320  714076 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024fe8b0}
	I1109 21:28:36.517358  714076 network_create.go:124] attempt to create docker network addons-386274 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 21:28:36.517416  714076 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-386274 addons-386274
	I1109 21:28:36.586395  714076 network_create.go:108] docker network addons-386274 192.168.49.0/24 created
	I1109 21:28:36.586430  714076 kic.go:121] calculated static IP "192.168.49.2" for the "addons-386274" container
	I1109 21:28:36.586512  714076 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 21:28:36.603748  714076 cli_runner.go:164] Run: docker volume create addons-386274 --label name.minikube.sigs.k8s.io=addons-386274 --label created_by.minikube.sigs.k8s.io=true
	I1109 21:28:36.621538  714076 oci.go:103] Successfully created a docker volume addons-386274
	I1109 21:28:36.621631  714076 cli_runner.go:164] Run: docker run --rm --name addons-386274-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386274 --entrypoint /usr/bin/test -v addons-386274:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1109 21:28:38.772298  714076 cli_runner.go:217] Completed: docker run --rm --name addons-386274-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386274 --entrypoint /usr/bin/test -v addons-386274:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib: (2.150626031s)
	I1109 21:28:38.772327  714076 oci.go:107] Successfully prepared a docker volume addons-386274
	I1109 21:28:38.772348  714076 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 21:28:38.772368  714076 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 21:28:38.772459  714076 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-386274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 21:28:43.070173  714076 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-386274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.297675494s)
	I1109 21:28:43.070204  714076 kic.go:203] duration metric: took 4.297833 seconds to extract preloaded images to volume
	W1109 21:28:43.070366  714076 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 21:28:43.070483  714076 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 21:28:43.140140  714076 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-386274 --name addons-386274 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-386274 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-386274 --network addons-386274 --ip 192.168.49.2 --volume addons-386274:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1109 21:28:43.484435  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Running}}
	I1109 21:28:43.516717  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:28:43.543137  714076 cli_runner.go:164] Run: docker exec addons-386274 stat /var/lib/dpkg/alternatives/iptables
	I1109 21:28:43.628611  714076 oci.go:144] the created container "addons-386274" has a running status.
	I1109 21:28:43.628640  714076 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa...
	I1109 21:28:43.928858  714076 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
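	(The two steps above are ordinary OpenSSH provisioning, just driven from Go. A minimal hand-rolled equivalent, assuming the same container name and key path, would be:

	    ssh-keygen -t rsa -N "" -f ./id_rsa                      # generate the machine keypair
	    docker exec -i --privileged addons-386274 sh -c \
	      'mkdir -p /home/docker/.ssh && cat >> /home/docker/.ssh/authorized_keys' < ./id_rsa.pub

	The follow-up chown to docker:docker is needed because docker exec runs as root.)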
	I1109 21:28:43.951177  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:28:43.974298  714076 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 21:28:43.974380  714076 kic_runner.go:114] Args: [docker exec --privileged addons-386274 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 21:28:44.061943  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:28:44.105275  714076 machine.go:88] provisioning docker machine ...
	I1109 21:28:44.105307  714076 ubuntu.go:169] provisioning hostname "addons-386274"
	I1109 21:28:44.105379  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:28:44.135003  714076 main.go:141] libmachine: Using SSH client type: native
	I1109 21:28:44.135422  714076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33675 <nil> <nil>}
	I1109 21:28:44.135443  714076 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-386274 && echo "addons-386274" | sudo tee /etc/hostname
	I1109 21:28:44.136091  714076 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 21:28:47.293899  714076 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-386274
	
	I1109 21:28:47.293996  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:28:47.312578  714076 main.go:141] libmachine: Using SSH client type: native
	I1109 21:28:47.312986  714076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33675 <nil> <nil>}
	I1109 21:28:47.313010  714076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-386274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-386274/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-386274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 21:28:47.455398  714076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 21:28:47.455470  714076 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 21:28:47.455546  714076 ubuntu.go:177] setting up certificates
	I1109 21:28:47.455591  714076 provision.go:83] configureAuth start
	I1109 21:28:47.455701  714076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386274
	I1109 21:28:47.476796  714076 provision.go:138] copyHostCerts
	I1109 21:28:47.476870  714076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 21:28:47.477013  714076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 21:28:47.477091  714076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 21:28:47.477158  714076 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.addons-386274 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-386274]
	I1109 21:28:47.887172  714076 provision.go:172] copyRemoteCerts
	I1109 21:28:47.887240  714076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 21:28:47.887285  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:28:47.906683  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:28:48.005642  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1109 21:28:48.035636  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 21:28:48.064756  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 21:28:48.094462  714076 provision.go:86] duration metric: configureAuth took 638.843018ms
	I1109 21:28:48.094493  714076 ubuntu.go:193] setting minikube options for container-runtime
	I1109 21:28:48.094677  714076 config.go:182] Loaded profile config "addons-386274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 21:28:48.094785  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:28:48.112322  714076 main.go:141] libmachine: Using SSH client type: native
	I1109 21:28:48.112753  714076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33675 <nil> <nil>}
	I1109 21:28:48.112778  714076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 21:28:48.363694  714076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 21:28:48.363722  714076 machine.go:91] provisioned docker machine in 4.258428766s
	I1109 21:28:48.363732  714076 client.go:171] LocalClient.Create took 12.539285947s
	I1109 21:28:48.363748  714076 start.go:167] duration metric: libmachine.API.Create for "addons-386274" took 12.539374274s
	I1109 21:28:48.363756  714076 start.go:300] post-start starting for "addons-386274" (driver="docker")
	I1109 21:28:48.363765  714076 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 21:28:48.363842  714076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 21:28:48.363894  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:28:48.380976  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:28:48.481109  714076 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 21:28:48.485277  714076 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 21:28:48.485323  714076 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 21:28:48.485336  714076 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 21:28:48.485343  714076 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1109 21:28:48.485358  714076 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 21:28:48.485425  714076 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 21:28:48.485466  714076 start.go:303] post-start completed in 121.704397ms
	I1109 21:28:48.485784  714076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386274
	I1109 21:28:48.502986  714076 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/config.json ...
	I1109 21:28:48.503270  714076 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 21:28:48.503321  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:28:48.524267  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:28:48.620322  714076 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 21:28:48.625685  714076 start.go:128] duration metric: createHost completed in 12.803826676s
	I1109 21:28:48.625746  714076 start.go:83] releasing machines lock for "addons-386274", held for 12.804011971s
	I1109 21:28:48.625837  714076 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-386274
	I1109 21:28:48.643866  714076 ssh_runner.go:195] Run: cat /version.json
	I1109 21:28:48.643915  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:28:48.643933  714076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 21:28:48.644005  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:28:48.664438  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:28:48.675617  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:28:48.891716  714076 ssh_runner.go:195] Run: systemctl --version
	I1109 21:28:48.897483  714076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 21:28:49.042625  714076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 21:28:49.048543  714076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 21:28:49.074893  714076 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 21:28:49.074972  714076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 21:28:49.113179  714076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
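	(The disabling above is rename-based, so it is reversible; a hypothetical one-liner to undo it by hand:

	    sudo find /etc/cni/net.d -name '*.mk_disabled' -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;

	Only kindnet's config is meant to stay active, so that CRI-O does not pick up a competing bridge or podman plugin.)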
	I1109 21:28:49.113202  714076 start.go:472] detecting cgroup driver to use...
	I1109 21:28:49.113251  714076 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1109 21:28:49.113326  714076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 21:28:49.131339  714076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 21:28:49.144577  714076 docker.go:203] disabling cri-docker service (if available) ...
	I1109 21:28:49.144665  714076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 21:28:49.160408  714076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 21:28:49.176562  714076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 21:28:49.280332  714076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 21:28:49.397467  714076 docker.go:219] disabling docker service ...
	I1109 21:28:49.397552  714076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 21:28:49.420056  714076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 21:28:49.433640  714076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 21:28:49.538285  714076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 21:28:49.647439  714076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 21:28:49.660418  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 21:28:49.678918  714076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1109 21:28:49.678982  714076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:28:49.690848  714076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 21:28:49.690916  714076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:28:49.702170  714076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:28:49.713595  714076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
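	(Net effect of the three sed edits above on /etc/crio/crio.conf.d/02-crio.conf, sketched from the commands rather than the shipped file:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.9"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"

	With the cgroupfs manager, CRI-O requires conmon_cgroup to be "pod", which is why the old line is deleted and re-added right after cgroup_manager.)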
	I1109 21:28:49.724657  714076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 21:28:49.735370  714076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 21:28:49.744905  714076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 21:28:49.754738  714076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 21:28:49.848608  714076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 21:28:49.973766  714076 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 21:28:49.973907  714076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 21:28:49.978397  714076 start.go:540] Will wait 60s for crictl version
	I1109 21:28:49.978454  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:28:49.982429  714076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 21:28:50.028794  714076 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1109 21:28:50.028915  714076 ssh_runner.go:195] Run: crio --version
	I1109 21:28:50.075844  714076 ssh_runner.go:195] Run: crio --version
	I1109 21:28:50.124133  714076 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1109 21:28:50.125972  714076 cli_runner.go:164] Run: docker network inspect addons-386274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 21:28:50.142611  714076 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 21:28:50.147158  714076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 21:28:50.160394  714076 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 21:28:50.160475  714076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 21:28:50.229253  714076 crio.go:496] all images are preloaded for cri-o runtime.
	I1109 21:28:50.229276  714076 crio.go:415] Images already preloaded, skipping extraction
	I1109 21:28:50.229333  714076 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 21:28:50.271122  714076 crio.go:496] all images are preloaded for cri-o runtime.
	I1109 21:28:50.271144  714076 cache_images.go:84] Images are preloaded, skipping loading
	I1109 21:28:50.271213  714076 ssh_runner.go:195] Run: crio config
	I1109 21:28:50.337539  714076 cni.go:84] Creating CNI manager for ""
	I1109 21:28:50.337563  714076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:28:50.337613  714076 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 21:28:50.337639  714076 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-386274 NodeName:addons-386274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 21:28:50.337849  714076 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-386274"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
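
	(Not executed in this run, but a rendered config like the one above can be validated without touching the node:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

	which renders the control-plane manifests under a temporary directory instead of /etc/kubernetes/manifests.)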
	
	I1109 21:28:50.337977  714076 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-386274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-386274 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 21:28:50.338094  714076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1109 21:28:50.348456  714076 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 21:28:50.348557  714076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 21:28:50.359034  714076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1109 21:28:50.379200  714076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 21:28:50.399613  714076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1109 21:28:50.419841  714076 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 21:28:50.424128  714076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 21:28:50.436642  714076 certs.go:56] Setting up /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274 for IP: 192.168.49.2
	I1109 21:28:50.436675  714076 certs.go:190] acquiring lock for shared ca certs: {Name:mk44b1a46a3acda84ddb5040e7a20ebcace98935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:50.436790  714076 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key
	I1109 21:28:50.991162  714076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt ...
	I1109 21:28:50.991197  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt: {Name:mk294bccc9e5b93286a365a19c984c91d3f5e514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:50.991407  714076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key ...
	I1109 21:28:50.991421  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key: {Name:mk59c8f5f4140b06d6991d3c93d16967e1ab456e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:50.991508  714076 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key
	I1109 21:28:51.267752  714076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt ...
	I1109 21:28:51.267782  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt: {Name:mkae7330fd31b098a79d3377a0dc6e842c5f2249 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:51.267960  714076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key ...
	I1109 21:28:51.267972  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key: {Name:mk2650d964d7d4367748425ecaaf40fc5343c075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:51.268085  714076 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.key
	I1109 21:28:51.268101  714076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt with IP's: []
	I1109 21:28:51.748287  714076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt ...
	I1109 21:28:51.748320  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: {Name:mk7d9618c489d2077858c9fa90487c7b42c1cc2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:51.748500  714076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.key ...
	I1109 21:28:51.748513  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.key: {Name:mk4eea51ddfe44f179d443241c29f98c8d49f087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:51.748596  714076 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.key.dd3b5fb2
	I1109 21:28:51.748616  714076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1109 21:28:52.523768  714076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.crt.dd3b5fb2 ...
	I1109 21:28:52.523805  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.crt.dd3b5fb2: {Name:mk72b0110ca757f5b26bd741da5529ffab49b7f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:52.523994  714076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.key.dd3b5fb2 ...
	I1109 21:28:52.524010  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.key.dd3b5fb2: {Name:mkbc770a92df829a55105082ce8294f202605e03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:52.524094  714076 certs.go:337] copying /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.crt
	I1109 21:28:52.524165  714076 certs.go:341] copying /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.key
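	(To confirm the SANs requested at 21:28:51.748 actually landed in the served cert, the standard openssl inspection works; the path is where the cert is staged below:

	    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'

	Expect the IP list [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1], plus whatever DNS names minikube adds, which this log does not show.)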
	I1109 21:28:52.524214  714076 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/proxy-client.key
	I1109 21:28:52.524234  714076 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/proxy-client.crt with IP's: []
	I1109 21:28:53.013185  714076 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/proxy-client.crt ...
	I1109 21:28:53.013216  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/proxy-client.crt: {Name:mka777a07a1cd000a3a9adc728c705dc26c625ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:53.014050  714076 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/proxy-client.key ...
	I1109 21:28:53.014067  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/proxy-client.key: {Name:mk82c1eb0d9867d2bad91e85bba47568e68897e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:28:53.014845  714076 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 21:28:53.014890  714076 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem (1078 bytes)
	I1109 21:28:53.014924  714076 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem (1123 bytes)
	I1109 21:28:53.014953  714076 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem (1679 bytes)
	I1109 21:28:53.015537  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 21:28:53.043950  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 21:28:53.072401  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 21:28:53.099860  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 21:28:53.127313  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 21:28:53.154696  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 21:28:53.182615  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 21:28:53.210118  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 21:28:53.237615  714076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 21:28:53.266522  714076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 21:28:53.287554  714076 ssh_runner.go:195] Run: openssl version
	I1109 21:28:53.294660  714076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 21:28:53.306132  714076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:28:53.310490  714076 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  9 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:28:53.310584  714076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:28:53.318807  714076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
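	(The symlink name b5213941.0 is not arbitrary: it is the OpenSSL subject hash of minikubeCA, i.e. exactly what the probe two lines up prints:

	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941

	OpenSSL resolves trust via <hash>.<n> filenames in /etc/ssl/certs, so this one link is what makes the cluster CA trusted host-wide inside the node.)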
	I1109 21:28:53.329856  714076 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1109 21:28:53.334092  714076 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1109 21:28:53.334141  714076 kubeadm.go:404] StartCluster: {Name:addons-386274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-386274 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:28:53.334215  714076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 21:28:53.334284  714076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 21:28:53.376322  714076 cri.go:89] found id: ""
	I1109 21:28:53.376435  714076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 21:28:53.387359  714076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 21:28:53.398217  714076 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1109 21:28:53.398305  714076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 21:28:53.409206  714076 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 21:28:53.409283  714076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 21:28:53.460890  714076 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1109 21:28:53.461390  714076 kubeadm.go:322] [preflight] Running pre-flight checks
	I1109 21:28:53.509376  714076 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1109 21:28:53.509483  714076 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1109 21:28:53.509542  714076 kubeadm.go:322] OS: Linux
	I1109 21:28:53.509605  714076 kubeadm.go:322] CGROUPS_CPU: enabled
	I1109 21:28:53.509674  714076 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1109 21:28:53.509741  714076 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1109 21:28:53.509804  714076 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1109 21:28:53.509858  714076 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1109 21:28:53.509911  714076 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1109 21:28:53.509961  714076 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1109 21:28:53.510013  714076 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1109 21:28:53.510063  714076 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1109 21:28:53.591325  714076 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 21:28:53.591448  714076 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 21:28:53.591547  714076 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 21:28:53.857286  714076 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 21:28:53.860890  714076 out.go:204]   - Generating certificates and keys ...
	I1109 21:28:53.860975  714076 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1109 21:28:53.861197  714076 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1109 21:28:54.124709  714076 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 21:28:54.420039  714076 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1109 21:28:54.694661  714076 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1109 21:28:55.195304  714076 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1109 21:28:55.574873  714076 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1109 21:28:55.575247  714076 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-386274 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 21:28:55.734935  714076 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1109 21:28:55.735320  714076 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-386274 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 21:28:56.920830  714076 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 21:28:57.803832  714076 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 21:28:58.405710  714076 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1109 21:28:58.406068  714076 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 21:29:00.510612  714076 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 21:29:01.721319  714076 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 21:29:02.132916  714076 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 21:29:03.009814  714076 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 21:29:03.010534  714076 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 21:29:03.013163  714076 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 21:29:03.016456  714076 out.go:204]   - Booting up control plane ...
	I1109 21:29:03.016622  714076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 21:29:03.016700  714076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 21:29:03.017718  714076 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 21:29:03.028458  714076 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 21:29:03.029426  714076 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 21:29:03.029705  714076 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1109 21:29:03.140215  714076 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 21:29:10.642260  714076 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502111 seconds
	I1109 21:29:10.642391  714076 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 21:29:10.656408  714076 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 21:29:11.178928  714076 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 21:29:11.179121  714076 kubeadm.go:322] [mark-control-plane] Marking the node addons-386274 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 21:29:11.691532  714076 kubeadm.go:322] [bootstrap-token] Using token: p0ujue.es7kafklarjde9gq
	I1109 21:29:11.693476  714076 out.go:204]   - Configuring RBAC rules ...
	I1109 21:29:11.693592  714076 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 21:29:11.698455  714076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 21:29:11.708077  714076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 21:29:11.712039  714076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 21:29:11.715643  714076 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 21:29:11.719677  714076 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 21:29:11.737527  714076 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 21:29:11.982342  714076 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1109 21:29:12.110970  714076 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1109 21:29:12.112138  714076 kubeadm.go:322] 
	I1109 21:29:12.112205  714076 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1109 21:29:12.112215  714076 kubeadm.go:322] 
	I1109 21:29:12.112287  714076 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1109 21:29:12.112292  714076 kubeadm.go:322] 
	I1109 21:29:12.112316  714076 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1109 21:29:12.112370  714076 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 21:29:12.112418  714076 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 21:29:12.112422  714076 kubeadm.go:322] 
	I1109 21:29:12.112473  714076 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1109 21:29:12.112478  714076 kubeadm.go:322] 
	I1109 21:29:12.112522  714076 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 21:29:12.112527  714076 kubeadm.go:322] 
	I1109 21:29:12.112576  714076 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1109 21:29:12.112647  714076 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 21:29:12.112711  714076 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 21:29:12.112715  714076 kubeadm.go:322] 
	I1109 21:29:12.112794  714076 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 21:29:12.112866  714076 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1109 21:29:12.112871  714076 kubeadm.go:322] 
	I1109 21:29:12.112949  714076 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token p0ujue.es7kafklarjde9gq \
	I1109 21:29:12.113047  714076 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 \
	I1109 21:29:12.113068  714076 kubeadm.go:322] 	--control-plane 
	I1109 21:29:12.113073  714076 kubeadm.go:322] 
	I1109 21:29:12.113152  714076 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1109 21:29:12.113158  714076 kubeadm.go:322] 
	I1109 21:29:12.113235  714076 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p0ujue.es7kafklarjde9gq \
	I1109 21:29:12.113330  714076 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 
	I1109 21:29:12.115793  714076 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1109 21:29:12.115903  714076 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
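	(kubeadm treats both of the above as warnings, not failures: the kicbase kernel ships without the "configs" module, and the kubelet unit is running but was never enabled. The second one has the one-line remedy kubeadm itself suggests:

	    sudo systemctl enable kubelet.service

	which only matters if the node should bring the kubelet back on its own after a reboot.)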
	I1109 21:29:12.115919  714076 cni.go:84] Creating CNI manager for ""
	I1109 21:29:12.115926  714076 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:29:12.119593  714076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1109 21:29:12.121819  714076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 21:29:12.133659  714076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1109 21:29:12.133678  714076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1109 21:29:12.163989  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
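	(The manifest scp'd above is the kindnet daemonset recommended two lines earlier; once this apply returns, a quick health probe, assuming the stock app=kindnet label on its pods, is:

	    kubectl -n kube-system get pods -l app=kindnet

	The ~0.8s gap before the next log line is this kubectl apply round-trip.)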
	I1109 21:29:12.998444  714076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 21:29:12.998524  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:12.998568  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b minikube.k8s.io/name=addons-386274 minikube.k8s.io/updated_at=2023_11_09T21_29_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:13.167404  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:13.167464  714076 ops.go:34] apiserver oom_adj: -16
	I1109 21:29:13.283440  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:13.880078  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:14.379606  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:14.879592  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:15.379588  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:15.880282  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:16.380549  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:16.880359  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:17.380170  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:17.879664  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:18.379765  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:18.880091  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:19.380128  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:19.880170  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:20.380567  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:20.879680  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:21.380158  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:21.879764  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:22.379645  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:22.880095  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:23.379654  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:23.880243  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:24.379695  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:24.879829  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:25.380304  714076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:29:25.475031  714076 kubeadm.go:1081] duration metric: took 12.476566792s to wait for elevateKubeSystemPrivileges.
	I1109 21:29:25.475059  714076 kubeadm.go:406] StartCluster complete in 32.140921864s
	I1109 21:29:25.475078  714076 settings.go:142] acquiring lock: {Name:mk717b4baf2280543b738622644195ea0d60d476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:29:25.475195  714076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:29:25.475607  714076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/kubeconfig: {Name:mk5701fd19491b0b49f183ef877286e38ea5f8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:29:25.475789  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 21:29:25.476062  714076 config.go:182] Loaded profile config "addons-386274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 21:29:25.476215  714076 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1109 21:29:25.476308  714076 addons.go:69] Setting volumesnapshots=true in profile "addons-386274"
	I1109 21:29:25.476323  714076 addons.go:231] Setting addon volumesnapshots=true in "addons-386274"
	I1109 21:29:25.476359  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.476821  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.477293  714076 addons.go:69] Setting cloud-spanner=true in profile "addons-386274"
	I1109 21:29:25.477312  714076 addons.go:231] Setting addon cloud-spanner=true in "addons-386274"
	I1109 21:29:25.477344  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.477748  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.478151  714076 addons.go:69] Setting metrics-server=true in profile "addons-386274"
	I1109 21:29:25.478181  714076 addons.go:231] Setting addon metrics-server=true in "addons-386274"
	I1109 21:29:25.478212  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.478667  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.480169  714076 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-386274"
	I1109 21:29:25.480190  714076 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-386274"
	I1109 21:29:25.480232  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.480624  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.483034  714076 addons.go:69] Setting registry=true in profile "addons-386274"
	I1109 21:29:25.483067  714076 addons.go:231] Setting addon registry=true in "addons-386274"
	I1109 21:29:25.483120  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.483564  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.490185  714076 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-386274"
	I1109 21:29:25.490291  714076 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-386274"
	I1109 21:29:25.490424  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.490914  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.506190  714076 addons.go:69] Setting storage-provisioner=true in profile "addons-386274"
	I1109 21:29:25.508899  714076 addons.go:231] Setting addon storage-provisioner=true in "addons-386274"
	I1109 21:29:25.508979  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.509467  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.534146  714076 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-386274"
	I1109 21:29:25.534229  714076 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-386274"
	I1109 21:29:25.537497  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.506246  714076 addons.go:69] Setting default-storageclass=true in profile "addons-386274"
	I1109 21:29:25.572907  714076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-386274"
	I1109 21:29:25.573287  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.603317  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1109 21:29:25.506266  714076 addons.go:69] Setting ingress=true in profile "addons-386274"
	I1109 21:29:25.506273  714076 addons.go:69] Setting ingress-dns=true in profile "addons-386274"
	I1109 21:29:25.506280  714076 addons.go:69] Setting inspektor-gadget=true in profile "addons-386274"
	I1109 21:29:25.506256  714076 addons.go:69] Setting gcp-auth=true in profile "addons-386274"
	I1109 21:29:25.605786  714076 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1109 21:29:25.605796  714076 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1109 21:29:25.605818  714076 addons.go:231] Setting addon ingress=true in "addons-386274"
	I1109 21:29:25.605827  714076 addons.go:231] Setting addon ingress-dns=true in "addons-386274"
	I1109 21:29:25.605839  714076 addons.go:231] Setting addon inspektor-gadget=true in "addons-386274"
	I1109 21:29:25.611313  714076 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1109 21:29:25.608087  714076 out.go:177]   - Using image docker.io/registry:2.8.3
	I1109 21:29:25.608094  714076 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1109 21:29:25.608111  714076 mustload.go:65] Loading cluster: addons-386274
	I1109 21:29:25.608121  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1109 21:29:25.608186  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.608210  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.608229  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.611772  714076 config.go:182] Loaded profile config "addons-386274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 21:29:25.611830  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.614817  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.614861  714076 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 21:29:25.615234  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.615564  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.617660  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.654801  714076 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1109 21:29:25.660772  714076 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1109 21:29:25.660821  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1109 21:29:25.660890  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
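
The recurring `docker container inspect -f` calls use a Go template to pull the host port that Docker mapped to the container's SSH port (22/tcp); minikube then dials that port (33675 in the sshutil lines below) to copy the addon manifests in. The same lookup works standalone:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-386274
	docker port addons-386274 22/tcp   # shorthand that prints the same mapping
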
	I1109 21:29:25.628930  714076 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 21:29:25.674748  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1109 21:29:25.674826  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.628978  714076 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1109 21:29:25.629407  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 21:29:25.702382  714076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:29:25.705390  714076 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 21:29:25.705454  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 21:29:25.705550  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.720025  714076 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-386274"
	I1109 21:29:25.720120  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.721006  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.702586  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1109 21:29:25.702601  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1109 21:29:25.728546  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1109 21:29:25.722066  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.702680  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.747439  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1109 21:29:25.797156  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1109 21:29:25.805242  714076 addons.go:231] Setting addon default-storageclass=true in "addons-386274"
	I1109 21:29:25.810838  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.811376  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:25.816777  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1109 21:29:25.819231  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1109 21:29:25.822573  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1109 21:29:25.829161  714076 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1109 21:29:25.831751  714076 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1109 21:29:25.831798  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1109 21:29:25.831893  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.805587  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 21:29:25.805742  714076 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-386274" context rescaled to 1 replicas
	I1109 21:29:25.848570  714076 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 21:29:25.851229  714076 out.go:177] * Verifying Kubernetes components...
	I1109 21:29:25.854056  714076 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1109 21:29:25.858971  714076 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1109 21:29:25.858989  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1109 21:29:25.859054  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.858445  714076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
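
The `systemctl is-active --quiet` call above checks the kubelet unit without printing anything; with --quiet only the exit status carries the answer (the call completes 2.6s later, at 21:29:28.498772). Standalone form:

	sudo systemctl is-active --quiet kubelet && echo "kubelet is active"
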
	I1109 21:29:25.891045  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:25.894600  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:25.895215  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:25.895978  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:25.902513  714076 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1109 21:29:25.904898  714076 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 21:29:25.904942  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1109 21:29:25.905029  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.920450  714076 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1109 21:29:25.926525  714076 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1109 21:29:25.934945  714076 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1109 21:29:25.937598  714076 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 21:29:25.937619  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1109 21:29:25.937687  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.952438  714076 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1109 21:29:25.954585  714076 out.go:177]   - Using image docker.io/busybox:stable
	I1109 21:29:25.957029  714076 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 21:29:25.957049  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1109 21:29:25.957117  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:25.998403  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.026132  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.027322  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.036323  714076 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 21:29:26.036344  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 21:29:26.036415  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:26.048999  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.078611  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.145210  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.145741  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.151553  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.165451  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:26.363001  714076 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1109 21:29:26.363025  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1109 21:29:26.412608  714076 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1109 21:29:26.412643  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1109 21:29:26.426210  714076 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 21:29:26.426235  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1109 21:29:26.432791  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 21:29:26.461357  714076 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1109 21:29:26.461384  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1109 21:29:26.490620  714076 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1109 21:29:26.490646  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1109 21:29:26.502882  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1109 21:29:26.540691  714076 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1109 21:29:26.540715  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1109 21:29:26.560343  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 21:29:26.562994  714076 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 21:29:26.563018  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 21:29:26.577423  714076 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1109 21:29:26.577449  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1109 21:29:26.592201  714076 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1109 21:29:26.592226  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1109 21:29:26.654914  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 21:29:26.673351  714076 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1109 21:29:26.673377  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1109 21:29:26.685638  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 21:29:26.688589  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 21:29:26.691438  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1109 21:29:26.693713  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 21:29:26.695908  714076 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 21:29:26.695929  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 21:29:26.698357  714076 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1109 21:29:26.698377  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1109 21:29:26.783274  714076 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1109 21:29:26.783300  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1109 21:29:26.803822  714076 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1109 21:29:26.803849  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1109 21:29:26.851583  714076 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1109 21:29:26.851610  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1109 21:29:26.878416  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 21:29:26.958307  714076 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1109 21:29:26.958364  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1109 21:29:27.007492  714076 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1109 21:29:27.007519  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1109 21:29:27.043099  714076 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 21:29:27.043122  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1109 21:29:27.127360  714076 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1109 21:29:27.127385  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1109 21:29:27.183429  714076 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1109 21:29:27.183454  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1109 21:29:27.241711  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 21:29:27.270941  714076 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1109 21:29:27.270969  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1109 21:29:27.322623  714076 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1109 21:29:27.322648  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1109 21:29:27.352988  714076 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1109 21:29:27.353013  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1109 21:29:27.472649  714076 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1109 21:29:27.472680  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1109 21:29:27.480019  714076 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1109 21:29:27.480044  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1109 21:29:27.548668  714076 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 21:29:27.548696  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1109 21:29:27.570039  714076 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1109 21:29:27.570063  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1109 21:29:27.611287  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 21:29:27.635541  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1109 21:29:28.498678  714076 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.650440121s)
	I1109 21:29:28.498708  714076 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
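
The sed pipeline that just completed (2.65s) edits the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive so that host.minikube.internal resolves to 192.168.49.1 (the host-side gateway address), and adds the log plugin before errors to enable query logging. After the replace, the relevant Corefile fragment reads:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

The fallthrough line keeps every other name flowing on to the normal forward plugin.
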
	I1109 21:29:28.498772  714076 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.608116661s)
	I1109 21:29:28.499576  714076 node_ready.go:35] waiting up to 6m0s for node "addons-386274" to be "Ready" ...
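
node_ready.go polls the node object until its Ready condition turns True; each of the `"Ready":"False"` lines below is one such poll. Roughly the same wait, expressed as a hypothetical kubectl one-liner (not what the test actually runs):

	kubectl --context addons-386274 wait --for=condition=Ready node/addons-386274 --timeout=6m
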
	I1109 21:29:29.815187  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.382357931s)
	I1109 21:29:30.194058  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.691138447s)
	I1109 21:29:30.875808  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:30.917182  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.356801336s)
	I1109 21:29:31.152822  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.497816367s)
	I1109 21:29:31.750349  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.064654995s)
	I1109 21:29:31.750470  714076 addons.go:467] Verifying addon ingress=true in "addons-386274"
	I1109 21:29:31.752916  714076 out.go:177] * Verifying ingress addon...
	I1109 21:29:31.750639  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.06202537s)
	I1109 21:29:31.750775  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.059205059s)
	I1109 21:29:31.750803  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.057067573s)
	I1109 21:29:31.750865  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.872420998s)
	I1109 21:29:31.750970  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.509232164s)
	I1109 21:29:31.753105  714076 addons.go:467] Verifying addon registry=true in "addons-386274"
	I1109 21:29:31.753388  714076 addons.go:467] Verifying addon metrics-server=true in "addons-386274"
	W1109 21:29:31.753521  714076 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 21:29:31.756375  714076 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 21:29:31.757829  714076 out.go:177] * Verifying registry addon...
	I1109 21:29:31.758011  714076 retry.go:31] will retry after 320.731829ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
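
Both failure dumps above are the same race: the volumesnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass are applied in a single batched kubectl apply, so the custom resource is validated before the API server has registered the new kind, hence "no matches for kind VolumeSnapshotClass". The retry at 21:29:32.082 below re-runs the batch with apply --force and completes cleanly at 21:29:33.818, once the CRDs created by the first attempt have been established. A sketch of an ordering that avoids the race entirely (a hypothetical split of the same manifest files named in the log, not minikube's actual retry logic):

	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml   # kind now resolves
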
	I1109 21:29:31.762076  714076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1109 21:29:31.773137  714076 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 21:29:31.773204  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:31.778286  714076 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 21:29:31.778395  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:31.784390  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:31.785195  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:32.022956  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.411619563s)
	I1109 21:29:32.022992  714076 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-386274"
	I1109 21:29:32.025870  714076 out.go:177] * Verifying csi-hostpath-driver addon...
	I1109 21:29:32.023323  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.38774856s)
	I1109 21:29:32.030277  714076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1109 21:29:32.046237  714076 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 21:29:32.046271  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:32.055212  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
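
From here on the log interleaves several kapi.go polling loops (ingress-nginx, registry, csi-hostpath-driver, and shortly gcp-auth), each re-checking its label selector on every tick while the pods sit in Pending. The kubectl equivalent of one such loop, shown only for orientation:

	kubectl --context addons-386274 -n kube-system wait --for=condition=Ready \
	  pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m
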
	I1109 21:29:32.082517  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 21:29:32.323041  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:32.323933  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:32.560710  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:32.815493  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:32.816293  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:33.059418  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:33.290517  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:33.291532  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:33.376212  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:33.602756  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:33.818014  714076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.735450555s)
	I1109 21:29:33.822249  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:33.829783  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:34.060349  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:34.289638  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:34.291570  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:34.559837  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:34.800026  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:34.800994  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:35.077227  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:35.193142  714076 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1109 21:29:35.193253  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:35.233773  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:35.293095  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:35.294012  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:35.386744  714076 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1109 21:29:35.414870  714076 addons.go:231] Setting addon gcp-auth=true in "addons-386274"
	I1109 21:29:35.414956  714076 host.go:66] Checking if "addons-386274" exists ...
	I1109 21:29:35.415432  714076 cli_runner.go:164] Run: docker container inspect addons-386274 --format={{.State.Status}}
	I1109 21:29:35.434961  714076 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1109 21:29:35.435013  714076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-386274
	I1109 21:29:35.456512  714076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33675 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/addons-386274/id_rsa Username:docker}
	I1109 21:29:35.562881  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:35.587581  714076 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1109 21:29:35.590222  714076 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1109 21:29:35.591875  714076 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1109 21:29:35.591899  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1109 21:29:35.623576  714076 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1109 21:29:35.623603  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1109 21:29:35.654884  714076 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 21:29:35.654909  714076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1109 21:29:35.681314  714076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
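
The three gcp-auth manifests just applied create the namespace, a Service, and the gcp-auth-webhook deployment together with (per the addon's design) the mutating webhook that injects the fake credentials, scp'd above to /var/lib/minikube/google_application_credentials.json, into pods created afterwards. An illustrative way to confirm the registration once the pod is up (not part of the test):

	kubectl --context addons-386274 get mutatingwebhookconfigurations
	kubectl --context addons-386274 -n gcp-auth get pods
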
	I1109 21:29:35.788791  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:35.800362  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:35.844395  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:36.060784  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:36.299620  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:36.300625  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:36.491787  714076 addons.go:467] Verifying addon gcp-auth=true in "addons-386274"
	I1109 21:29:36.494434  714076 out.go:177] * Verifying gcp-auth addon...
	I1109 21:29:36.497355  714076 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1109 21:29:36.512207  714076 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1109 21:29:36.512231  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:36.524637  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:36.560892  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:36.790030  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:36.792363  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:37.029340  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:37.060909  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:37.292802  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:37.293701  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:37.529625  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:37.560882  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:37.788875  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:37.790433  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:38.028729  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:38.061519  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:38.288806  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:38.290749  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:38.343146  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:38.528747  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:38.559981  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:38.789238  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:38.790275  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:39.028561  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:39.059923  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:39.291740  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:39.292476  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:39.529284  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:39.559604  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:39.789399  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:39.790146  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:40.028373  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:40.060273  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:40.289582  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:40.290635  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:40.344308  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:40.528759  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:40.560076  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:40.789629  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:40.790296  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:41.028379  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:41.060160  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:41.288842  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:41.289912  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:41.528971  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:41.560580  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:41.789497  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:41.789973  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:42.028929  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:42.060898  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:42.291553  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:42.291665  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:42.528585  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:42.559998  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:42.790234  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:42.790962  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:42.843249  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:43.028474  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:43.060434  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:43.289042  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:43.291267  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:43.529054  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:43.559910  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:43.789627  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:43.789948  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:44.028792  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:44.059705  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:44.289790  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:44.290585  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:44.528422  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:44.560304  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:44.788931  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:44.789645  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:45.028622  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:45.060482  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:45.290503  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:45.291038  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:45.343217  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:45.528854  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:45.559776  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:45.789277  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:45.790394  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:46.028881  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:46.060442  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:46.288783  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:46.290007  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:46.529517  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:46.561208  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:46.788919  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:46.790140  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:47.028129  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:47.059930  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:47.289698  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:47.290595  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:47.343255  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:47.528794  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:47.559692  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:47.789074  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:47.789823  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:48.028283  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:48.059916  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:48.290187  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:48.290299  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:48.529126  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:48.564056  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:48.792005  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:48.793592  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:49.028213  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:49.059526  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:49.288909  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:49.289986  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:49.343334  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:49.529007  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:49.559911  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:49.790898  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:49.791375  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:50.028948  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:50.059922  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:50.289389  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:50.290171  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:50.528166  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:50.559939  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:50.789720  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:50.790534  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:51.028598  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:51.060186  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:51.288742  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:51.291153  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:51.343710  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:51.528349  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:51.560131  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:51.789868  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:51.789529  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:52.028892  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:52.060632  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:52.290379  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:52.291190  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:52.528759  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:52.560119  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:52.789631  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:52.791044  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:53.028897  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:53.060532  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:53.289388  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:53.290428  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:53.528579  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:53.559403  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:53.789354  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:53.790545  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:53.843054  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:54.028286  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:54.060212  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:54.288466  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:54.289596  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:54.529847  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:54.559651  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:54.789372  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:54.790016  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:55.029271  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:55.060235  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:55.289060  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:55.291291  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:55.528790  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:55.559564  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:55.789027  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:55.791530  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:55.843177  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:56.028244  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:56.060154  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:56.289800  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:56.290396  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:56.528166  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:56.560656  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:56.789771  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:56.790111  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:57.028511  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:57.061076  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:57.288897  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:57.290224  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:57.529214  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:57.560176  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:57.788187  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:57.789614  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:57.843253  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:29:58.028972  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:58.060184  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:58.289892  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:58.290602  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:58.528035  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:58.559814  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:58.791590  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:58.792230  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:59.029466  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:59.060579  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:59.288959  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:59.290626  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:59.529187  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:29:59.560007  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:29:59.790035  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:29:59.790636  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:29:59.844268  714076 node_ready.go:58] node "addons-386274" has status "Ready":"False"
	I1109 21:30:00.041705  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:00.088109  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:00.298496  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:00.300049  714076 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 21:30:00.300077  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:00.357454  714076 node_ready.go:49] node "addons-386274" has status "Ready":"True"
	I1109 21:30:00.357483  714076 node_ready.go:38] duration metric: took 31.85787935s waiting for node "addons-386274" to be "Ready" ...
	I1109 21:30:00.357494  714076 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
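(The node_ready.go:58 probes above, and the flip to "Ready":"True" at 21:30:00 after 31.86s, come from the same kind of poll run against the node object's Ready condition. A sketch of that check under the same package and imports as the previous one; this is an assumed shape, not minikube's literal node_ready.go.)

    // waitNodeReady polls the named node until its Ready condition is True.
    // (Same package and imports as the sketch above.)
    func waitNodeReady(ctx context.Context, c *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 10*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling through transient API errors
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        fmt.Printf("node %q has status \"Ready\":%q\n", name, cond.Status)
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }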
	I1109 21:30:00.399947  714076 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n2wnr" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:00.625596  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:00.630693  714076 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 21:30:00.630855  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:00.808555  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:00.821550  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:01.036791  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:01.068657  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:01.290235  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:01.291853  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:01.534414  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:01.570662  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:01.649532  714076 pod_ready.go:92] pod "coredns-5dd5756b68-n2wnr" in "kube-system" namespace has status "Ready":"True"
	I1109 21:30:01.649561  714076 pod_ready.go:81] duration metric: took 1.249566402s waiting for pod "coredns-5dd5756b68-n2wnr" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.649585  714076 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-386274" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.656291  714076 pod_ready.go:92] pod "etcd-addons-386274" in "kube-system" namespace has status "Ready":"True"
	I1109 21:30:01.656318  714076 pod_ready.go:81] duration metric: took 6.725308ms waiting for pod "etcd-addons-386274" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.656333  714076 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-386274" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.667285  714076 pod_ready.go:92] pod "kube-apiserver-addons-386274" in "kube-system" namespace has status "Ready":"True"
	I1109 21:30:01.667313  714076 pod_ready.go:81] duration metric: took 10.971128ms waiting for pod "kube-apiserver-addons-386274" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.667325  714076 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-386274" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.675964  714076 pod_ready.go:92] pod "kube-controller-manager-addons-386274" in "kube-system" namespace has status "Ready":"True"
	I1109 21:30:01.675992  714076 pod_ready.go:81] duration metric: took 8.658242ms waiting for pod "kube-controller-manager-addons-386274" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.676011  714076 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qrdzs" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.790703  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:01.791723  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:01.957539  714076 pod_ready.go:92] pod "kube-proxy-qrdzs" in "kube-system" namespace has status "Ready":"True"
	I1109 21:30:01.957564  714076 pod_ready.go:81] duration metric: took 281.545711ms waiting for pod "kube-proxy-qrdzs" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:01.957578  714076 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-386274" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:02.032433  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:02.062628  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:02.290553  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:02.294189  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:02.344265  714076 pod_ready.go:92] pod "kube-scheduler-addons-386274" in "kube-system" namespace has status "Ready":"True"
	I1109 21:30:02.344289  714076 pod_ready.go:81] duration metric: took 386.703157ms waiting for pod "kube-scheduler-addons-386274" in "kube-system" namespace to be "Ready" ...
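(The pod_ready.go:78/92 pairs above wait on a single named pod rather than a selector; the check itself is just reading the pod's Ready condition. A minimal assumed sketch, same package and imports as above.)

    // podIsReady reports whether the named pod's Ready condition is True,
    // the per-pod check behind the pod_ready.go:78/92 log pairs.
    func podIsReady(ctx context.Context, c *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }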
	I1109 21:30:02.344301  714076 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:02.529084  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:02.561872  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:02.791250  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:02.793719  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:03.029765  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:03.064358  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:03.289706  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:03.291196  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:03.529140  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:03.562018  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:03.791209  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:03.791964  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:04.029519  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:04.061060  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:04.289996  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:04.292416  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:04.529060  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:04.561516  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:04.653792  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:04.790225  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:04.791065  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:05.028948  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:05.065231  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:05.291702  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:05.294265  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:05.529159  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:05.562409  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:05.791220  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:05.792021  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:06.028396  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:06.062396  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:06.292918  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:06.295175  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:06.529583  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:06.562598  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:06.657777  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:06.791355  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:06.792315  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:07.029424  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:07.062496  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:07.289980  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:07.291268  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:07.529088  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:07.561907  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:07.789626  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:07.791179  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:08.028771  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:08.061397  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:08.290229  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:08.291757  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:08.530763  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:08.569479  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:08.793286  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:08.796271  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:09.028440  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:09.061487  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:09.154548  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:09.293272  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:09.294205  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:09.529769  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:09.562617  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:09.790504  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:09.792814  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:10.029018  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:10.061619  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:10.289696  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:10.290713  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:10.529569  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:10.565074  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:10.792607  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:10.793146  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:11.029182  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:11.061807  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:11.291839  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:11.293518  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:11.528919  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:11.565164  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:11.655835  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:11.792442  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:11.793660  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:12.029796  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:12.063127  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:12.293588  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:12.295168  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:12.529527  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:12.565060  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:12.793653  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:12.795431  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:13.028569  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:13.063204  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:13.294106  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:13.298093  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:13.529579  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:13.562694  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:13.790275  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:13.791485  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:14.028203  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:14.062945  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:14.155536  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:14.297773  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:14.307780  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:14.529000  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:14.562349  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:14.791727  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:14.793187  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:15.028836  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:15.066650  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:15.297186  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:15.301024  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:15.529144  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:15.561275  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:15.789831  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:15.791033  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:16.030036  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:16.063883  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:16.158208  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:16.292917  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:16.294282  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:16.529404  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:16.562484  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:16.790675  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:16.790916  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:17.028113  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:17.061974  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:17.288991  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:17.291005  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:17.528691  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:17.561160  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:17.790793  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:17.791947  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:18.028748  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:18.061950  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:18.290864  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:18.291684  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:18.528442  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:18.561460  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:18.654221  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:18.789371  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:18.790914  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:19.028851  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:19.061213  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:19.290835  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:19.291784  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:19.529069  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:19.561321  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:19.789629  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:19.791569  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:20.029062  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:20.062628  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:20.298292  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:20.299843  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:20.528889  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:20.566041  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:20.655312  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:20.789798  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:20.791398  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:21.028543  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:21.061021  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:21.290534  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:21.290707  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:21.528161  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:21.561132  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:21.793093  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:21.794631  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:22.031341  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:22.062400  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:22.293173  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:22.294495  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:22.528776  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:22.560935  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:22.656963  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:22.790724  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:22.791115  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:23.028381  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:23.060906  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:23.292604  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:23.296017  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:23.530200  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:23.560577  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:23.799330  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:23.800208  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:24.030927  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:24.064363  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:24.290754  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:24.291523  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:24.529596  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:24.561982  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:24.789573  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:24.790776  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:25.029243  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:25.061065  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:25.154926  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:25.293631  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:25.294895  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:25.534857  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:25.569524  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:25.790752  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:25.791671  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:26.030206  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:26.062958  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:26.292537  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:26.301485  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:26.530517  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:26.563430  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:26.795626  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:26.801478  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:27.032733  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:27.064605  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:27.156991  714076 pod_ready.go:102] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:27.293298  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:27.295462  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:27.532751  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:27.563707  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:27.793676  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:27.794593  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:28.028857  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:28.064526  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:28.290837  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:28.292009  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:28.529688  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:28.562817  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:28.827328  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:28.837320  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:29.028968  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:29.069551  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:29.154100  714076 pod_ready.go:92] pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace has status "Ready":"True"
	I1109 21:30:29.154126  714076 pod_ready.go:81] duration metric: took 26.809818774s waiting for pod "metrics-server-7c66d45ddc-654wx" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:29.154139  714076 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9nwzl" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:29.290646  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:29.293452  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:29.528491  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:29.562425  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:29.792849  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:29.794791  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:30.032887  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:30.062236  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:30.297464  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:30.298465  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:30.529466  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:30.562657  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:30.796053  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:30.798769  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:31.029102  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:31.066903  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:31.173477  714076 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-9nwzl" in "kube-system" namespace has status "Ready":"False"
	I1109 21:30:31.292932  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:31.293975  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:31.532805  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:31.561656  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:31.791190  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:31.793009  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:32.029093  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:32.061383  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:32.289457  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:32.290876  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:32.528779  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:32.560968  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:32.791121  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:32.791640  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:33.034112  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:33.061553  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:33.174165  714076 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-9nwzl" in "kube-system" namespace has status "Ready":"True"
	I1109 21:30:33.174193  714076 pod_ready.go:81] duration metric: took 4.020017171s waiting for pod "nvidia-device-plugin-daemonset-9nwzl" in "kube-system" namespace to be "Ready" ...
	I1109 21:30:33.174240  714076 pod_ready.go:38] duration metric: took 32.816717849s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:30:33.174262  714076 api_server.go:52] waiting for apiserver process to appear ...
	I1109 21:30:33.174288  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 21:30:33.174421  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 21:30:33.223088  714076 cri.go:89] found id: "5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867"
	I1109 21:30:33.223113  714076 cri.go:89] found id: ""
	I1109 21:30:33.223127  714076 logs.go:284] 1 containers: [5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867]
	I1109 21:30:33.223216  714076 ssh_runner.go:195] Run: which crictl
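(The cri.go/ssh_runner.go lines above enumerate containers for each control-plane component by running crictl with a name filter over SSH; the flags are exactly the ones shown in the log. A local-exec sketch of the same query, assuming crictl is on PATH and sudo needs no password.)

    package crictlquery

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs `sudo crictl ps -a --quiet --name=<name>` and
    // returns the container IDs it prints, one per line, as the cri.go:89
    // "found id" lines above show.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        ids := strings.Fields(strings.TrimSpace(string(out)))
        fmt.Printf("%d containers: %v\n", len(ids), ids)
        return ids, nil
    }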
	I1109 21:30:33.227907  714076 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 21:30:33.227980  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 21:30:33.280060  714076 cri.go:89] found id: "de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f"
	I1109 21:30:33.280081  714076 cri.go:89] found id: ""
	I1109 21:30:33.280090  714076 logs.go:284] 1 containers: [de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f]
	I1109 21:30:33.280163  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:33.285015  714076 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 21:30:33.285121  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 21:30:33.296047  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:33.299296  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:33.369268  714076 cri.go:89] found id: "17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4"
	I1109 21:30:33.369332  714076 cri.go:89] found id: ""
	I1109 21:30:33.369355  714076 logs.go:284] 1 containers: [17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4]
	I1109 21:30:33.369432  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:33.376415  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 21:30:33.376528  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 21:30:33.529296  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:33.538884  714076 cri.go:89] found id: "d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451"
	I1109 21:30:33.538945  714076 cri.go:89] found id: ""
	I1109 21:30:33.538977  714076 logs.go:284] 1 containers: [d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451]
	I1109 21:30:33.539053  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:33.555565  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 21:30:33.555685  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 21:30:33.562042  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:33.793828  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:33.795991  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:33.882792  714076 cri.go:89] found id: "aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09"
	I1109 21:30:33.882853  714076 cri.go:89] found id: ""
	I1109 21:30:33.882876  714076 logs.go:284] 1 containers: [aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09]
	I1109 21:30:33.882953  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:33.902605  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 21:30:33.902724  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 21:30:34.028415  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:34.073003  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:34.104582  714076 cri.go:89] found id: "32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794"
	I1109 21:30:34.104647  714076 cri.go:89] found id: ""
	I1109 21:30:34.104677  714076 logs.go:284] 1 containers: [32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794]
	I1109 21:30:34.104749  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:34.121459  714076 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 21:30:34.121573  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 21:30:34.292829  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:34.293791  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:34.307121  714076 cri.go:89] found id: "d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0"
	I1109 21:30:34.307188  714076 cri.go:89] found id: ""
	I1109 21:30:34.307210  714076 logs.go:284] 1 containers: [d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0]
	I1109 21:30:34.307281  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:34.313117  714076 logs.go:123] Gathering logs for kubelet ...
	I1109 21:30:34.313155  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1109 21:30:34.389717  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.181233    1365 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:34.389989  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181276    1365 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:34.390197  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.181617    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:34.390449  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181644    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:34.405679  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.230667    1365 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:34.405923  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.230711    1365 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:34.406283  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.231597    1365 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:34.406558  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.231646    1365 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
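The eight "Found kubelet problem" lines above are all one class of error: the Kubernetes node authorizer only lets a kubelet read a Secret or ConfigMap once some pod bound to that node references it, so the "no relationship found between node 'addons-386274' and this object" warnings are expected transiently while the ingress-nginx and gcp-auth pods are still being scheduled. One way to see which pods on the node reference such an object is to list pods by node; a hedged client-go sketch, with the node and namespace names taken from the log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Pods from the ingress-nginx namespace bound to the node named in
    	// the kubelet errors; until one of these exists and mounts the
    	// object, the node authorizer denies the kubelet's list/watch.
    	pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.Background(),
    		metav1.ListOptions{FieldSelector: "spec.nodeName=addons-386274"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Println(p.Name, p.Status.Phase)
    	}
    }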
	I1109 21:30:34.427203  714076 logs.go:123] Gathering logs for dmesg ...
	I1109 21:30:34.427246  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 21:30:34.451338  714076 logs.go:123] Gathering logs for describe nodes ...
	I1109 21:30:34.451376  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1109 21:30:34.529317  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:34.562118  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:34.757699  714076 logs.go:123] Gathering logs for kube-apiserver [5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867] ...
	I1109 21:30:34.757734  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867"
	I1109 21:30:34.792197  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:34.793527  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:34.890790  714076 logs.go:123] Gathering logs for coredns [17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4] ...
	I1109 21:30:34.890889  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4"
	I1109 21:30:34.991151  714076 logs.go:123] Gathering logs for kube-scheduler [d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451] ...
	I1109 21:30:34.991224  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451"
	I1109 21:30:35.029247  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:35.062190  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:35.122969  714076 logs.go:123] Gathering logs for kindnet [d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0] ...
	I1109 21:30:35.123039  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0"
	I1109 21:30:35.185291  714076 logs.go:123] Gathering logs for etcd [de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f] ...
	I1109 21:30:35.185316  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f"
	I1109 21:30:35.273336  714076 logs.go:123] Gathering logs for kube-proxy [aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09] ...
	I1109 21:30:35.273427  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09"
	I1109 21:30:35.292536  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:35.293416  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:35.347509  714076 logs.go:123] Gathering logs for kube-controller-manager [32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794] ...
	I1109 21:30:35.347594  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794"
	I1109 21:30:35.473897  714076 logs.go:123] Gathering logs for CRI-O ...
	I1109 21:30:35.473932  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 21:30:35.528769  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:35.573396  714076 logs.go:123] Gathering logs for container status ...
	I1109 21:30:35.573428  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 21:30:35.576613  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:35.652677  714076 out.go:309] Setting ErrFile to fd 2...
	I1109 21:30:35.652739  714076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1109 21:30:35.652809  714076 out.go:239] X Problems detected in kubelet:
	W1109 21:30:35.652853  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181644    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:35.657615  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.230667    1365 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:35.657658  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.230711    1365 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:35.657745  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.231597    1365 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:35.657782  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.231646    1365 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	I1109 21:30:35.657824  714076 out.go:309] Setting ErrFile to fd 2...
	I1109 21:30:35.657851  714076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:30:35.796345  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:35.796665  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:36.029258  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:36.061680  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:36.290544  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:36.291181  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:36.529015  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:36.568248  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:36.792126  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:36.796578  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:37.032026  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:37.061746  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:37.292024  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 21:30:37.294979  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:37.528538  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:37.561355  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:37.789515  714076 kapi.go:107] duration metric: took 1m6.027434619s to wait for kubernetes.io/minikube-addons=registry ...
	I1109 21:30:37.792051  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:38.029186  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:38.068273  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:38.289675  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:38.529010  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:38.563467  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:38.790161  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:39.029064  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:39.064081  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:39.290446  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:39.528647  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:39.562883  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:39.791788  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:40.029144  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:40.061136  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:40.289800  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:40.528497  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:40.562303  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:40.803795  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:41.028961  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:41.062494  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:41.290613  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:41.528060  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:41.561039  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:41.806075  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:42.029116  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:42.062572  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:42.292515  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:42.528086  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:42.561899  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:42.795807  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:43.029296  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:43.062560  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:43.290112  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:43.528828  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:43.562554  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:43.789760  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:44.028830  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:44.062429  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:44.290212  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:44.530158  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:44.562842  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:44.790422  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:45.030249  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:45.065543  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:45.290258  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:45.529283  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:45.561334  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:45.659452  714076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 21:30:45.674236  714076 api_server.go:72] duration metric: took 1m19.825627989s to wait for apiserver process to appear ...
	I1109 21:30:45.674266  714076 api_server.go:88] waiting for apiserver healthz status ...
	I1109 21:30:45.674297  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 21:30:45.674384  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 21:30:45.718681  714076 cri.go:89] found id: "5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867"
	I1109 21:30:45.718705  714076 cri.go:89] found id: ""
	I1109 21:30:45.718713  714076 logs.go:284] 1 containers: [5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867]
	I1109 21:30:45.718776  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:45.723330  714076 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 21:30:45.723406  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 21:30:45.768461  714076 cri.go:89] found id: "de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f"
	I1109 21:30:45.768485  714076 cri.go:89] found id: ""
	I1109 21:30:45.768494  714076 logs.go:284] 1 containers: [de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f]
	I1109 21:30:45.768548  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:45.773110  714076 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 21:30:45.773183  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 21:30:45.792473  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:45.827346  714076 cri.go:89] found id: "17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4"
	I1109 21:30:45.827369  714076 cri.go:89] found id: ""
	I1109 21:30:45.827378  714076 logs.go:284] 1 containers: [17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4]
	I1109 21:30:45.827432  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:45.832092  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 21:30:45.832160  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 21:30:45.878684  714076 cri.go:89] found id: "d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451"
	I1109 21:30:45.878707  714076 cri.go:89] found id: ""
	I1109 21:30:45.878715  714076 logs.go:284] 1 containers: [d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451]
	I1109 21:30:45.878769  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:45.883199  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 21:30:45.883304  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 21:30:45.925378  714076 cri.go:89] found id: "aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09"
	I1109 21:30:45.925401  714076 cri.go:89] found id: ""
	I1109 21:30:45.925409  714076 logs.go:284] 1 containers: [aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09]
	I1109 21:30:45.925493  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:45.931540  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 21:30:45.931670  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 21:30:45.977122  714076 cri.go:89] found id: "32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794"
	I1109 21:30:45.977144  714076 cri.go:89] found id: ""
	I1109 21:30:45.977152  714076 logs.go:284] 1 containers: [32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794]
	I1109 21:30:45.977236  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:45.981802  714076 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 21:30:45.981882  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 21:30:46.028434  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:46.033335  714076 cri.go:89] found id: "d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0"
	I1109 21:30:46.033359  714076 cri.go:89] found id: ""
	I1109 21:30:46.033368  714076 logs.go:284] 1 containers: [d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0]
	I1109 21:30:46.033424  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:46.037905  714076 logs.go:123] Gathering logs for CRI-O ...
	I1109 21:30:46.037929  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 21:30:46.064060  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:46.159632  714076 logs.go:123] Gathering logs for container status ...
	I1109 21:30:46.159671  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 21:30:46.285194  714076 logs.go:123] Gathering logs for dmesg ...
	I1109 21:30:46.285224  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 21:30:46.290650  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:46.327180  714076 logs.go:123] Gathering logs for describe nodes ...
	I1109 21:30:46.327210  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1109 21:30:46.542632  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:46.568368  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:46.575357  714076 logs.go:123] Gathering logs for kube-apiserver [5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867] ...
	I1109 21:30:46.575387  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867"
	I1109 21:30:46.748633  714076 logs.go:123] Gathering logs for etcd [de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f] ...
	I1109 21:30:46.748675  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f"
	I1109 21:30:46.795088  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:46.849431  714076 logs.go:123] Gathering logs for kindnet [d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0] ...
	I1109 21:30:46.849473  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0"
	I1109 21:30:46.941480  714076 logs.go:123] Gathering logs for kubelet ...
	I1109 21:30:46.941508  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1109 21:30:47.005023  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.181233    1365 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.005251  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181276    1365 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.005458  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.181617    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.005664  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181644    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.016233  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.230667    1365 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.016438  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.230711    1365 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.016759  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.231597    1365 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.016964  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.231646    1365 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	I1109 21:30:47.028420  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:47.042013  714076 logs.go:123] Gathering logs for coredns [17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4] ...
	I1109 21:30:47.042042  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4"
	I1109 21:30:47.062697  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:47.092832  714076 logs.go:123] Gathering logs for kube-scheduler [d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451] ...
	I1109 21:30:47.092860  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451"
	I1109 21:30:47.148592  714076 logs.go:123] Gathering logs for kube-proxy [aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09] ...
	I1109 21:30:47.148622  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09"
	I1109 21:30:47.197467  714076 logs.go:123] Gathering logs for kube-controller-manager [32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794] ...
	I1109 21:30:47.197495  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794"
	I1109 21:30:47.279989  714076 out.go:309] Setting ErrFile to fd 2...
	I1109 21:30:47.280019  714076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1109 21:30:47.280076  714076 out.go:239] X Problems detected in kubelet:
	W1109 21:30:47.280087  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181644    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.280095  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.230667    1365 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.280111  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.230711    1365 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.280118  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.231597    1365 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:47.280129  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.231646    1365 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	I1109 21:30:47.280136  714076 out.go:309] Setting ErrFile to fd 2...
	I1109 21:30:47.280142  714076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:30:47.293660  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:47.528766  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:47.560701  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:47.790399  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:48.029746  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:48.063523  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:48.291112  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:48.529269  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:48.569277  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:48.790661  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:49.029355  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:49.063176  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:49.292390  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:49.528795  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:49.560838  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:49.789988  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:50.029121  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:50.061333  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:50.290092  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:50.528623  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:50.562460  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:50.797157  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:51.028773  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:51.063028  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:51.290980  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:51.530540  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:51.561223  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 21:30:51.791058  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:52.028406  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:52.061255  714076 kapi.go:107] duration metric: took 1m20.030979198s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1109 21:30:52.289834  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:52.528561  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:52.789631  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:53.028371  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:53.289890  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:53.529508  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:53.790825  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:54.029330  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:54.290495  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:54.528034  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:54.790272  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:55.028370  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:55.290262  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:55.528289  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:55.790627  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:56.028955  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:56.290261  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:56.529066  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:56.789959  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:57.028883  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:57.280657  714076 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 21:30:57.289406  714076 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 21:30:57.291958  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:57.292520  714076 api_server.go:141] control plane version: v1.28.3
	I1109 21:30:57.292546  714076 api_server.go:131] duration metric: took 11.618272941s to wait for apiserver health ...
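The healthz probe above (GET https://192.168.49.2:8443/healthz answering 200 with body "ok") can be reproduced from a kubeconfig with a raw request; a minimal sketch, again assuming credentials in ~/.kube/config:

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Equivalent of the log's healthz check: a raw GET against /healthz,
    	// which returns the body "ok" on a healthy apiserver.
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)
    }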
	I1109 21:30:57.292555  714076 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 21:30:57.292576  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1109 21:30:57.292648  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1109 21:30:57.339439  714076 cri.go:89] found id: "5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867"
	I1109 21:30:57.339460  714076 cri.go:89] found id: ""
	I1109 21:30:57.339468  714076 logs.go:284] 1 containers: [5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867]
	I1109 21:30:57.339540  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:57.344163  714076 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1109 21:30:57.344234  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1109 21:30:57.386901  714076 cri.go:89] found id: "de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f"
	I1109 21:30:57.386925  714076 cri.go:89] found id: ""
	I1109 21:30:57.386934  714076 logs.go:284] 1 containers: [de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f]
	I1109 21:30:57.386987  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:57.391960  714076 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1109 21:30:57.392063  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1109 21:30:57.438713  714076 cri.go:89] found id: "17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4"
	I1109 21:30:57.438737  714076 cri.go:89] found id: ""
	I1109 21:30:57.438746  714076 logs.go:284] 1 containers: [17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4]
	I1109 21:30:57.438799  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:57.443228  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1109 21:30:57.443333  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1109 21:30:57.494399  714076 cri.go:89] found id: "d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451"
	I1109 21:30:57.494472  714076 cri.go:89] found id: ""
	I1109 21:30:57.494512  714076 logs.go:284] 1 containers: [d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451]
	I1109 21:30:57.494588  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:57.500240  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1109 21:30:57.500313  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1109 21:30:57.529272  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:57.546383  714076 cri.go:89] found id: "aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09"
	I1109 21:30:57.546420  714076 cri.go:89] found id: ""
	I1109 21:30:57.546429  714076 logs.go:284] 1 containers: [aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09]
	I1109 21:30:57.546544  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:57.551333  714076 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1109 21:30:57.551410  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1109 21:30:57.594506  714076 cri.go:89] found id: "32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794"
	I1109 21:30:57.594532  714076 cri.go:89] found id: ""
	I1109 21:30:57.594542  714076 logs.go:284] 1 containers: [32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794]
	I1109 21:30:57.594607  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:57.599691  714076 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1109 21:30:57.599769  714076 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1109 21:30:57.648569  714076 cri.go:89] found id: "d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0"
	I1109 21:30:57.648590  714076 cri.go:89] found id: ""
	I1109 21:30:57.648600  714076 logs.go:284] 1 containers: [d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0]
	I1109 21:30:57.648676  714076 ssh_runner.go:195] Run: which crictl
	I1109 21:30:57.653439  714076 logs.go:123] Gathering logs for kubelet ...
	I1109 21:30:57.653466  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1109 21:30:57.716139  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.181233    1365 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:57.716372  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181276    1365 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:57.716559  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.181617    1365 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:57.716766  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181644    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:57.727641  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.230667    1365 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:57.727848  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.230711    1365 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:57.728175  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.231597    1365 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:57.728377  714076 logs.go:138] Found kubelet problem: Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.231646    1365 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	I1109 21:30:57.754396  714076 logs.go:123] Gathering logs for kube-apiserver [5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867] ...
	I1109 21:30:57.754423  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867"
	I1109 21:30:57.790019  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:57.848068  714076 logs.go:123] Gathering logs for coredns [17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4] ...
	I1109 21:30:57.848104  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4"
	I1109 21:30:57.892749  714076 logs.go:123] Gathering logs for kube-scheduler [d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451] ...
	I1109 21:30:57.892776  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451"
	I1109 21:30:57.954750  714076 logs.go:123] Gathering logs for kube-proxy [aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09] ...
	I1109 21:30:57.954786  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09"
	I1109 21:30:57.998305  714076 logs.go:123] Gathering logs for kindnet [d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0] ...
	I1109 21:30:57.998396  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0"
	I1109 21:30:58.029050  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:58.051908  714076 logs.go:123] Gathering logs for CRI-O ...
	I1109 21:30:58.051936  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1109 21:30:58.152241  714076 logs.go:123] Gathering logs for dmesg ...
	I1109 21:30:58.152282  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 21:30:58.175912  714076 logs.go:123] Gathering logs for describe nodes ...
	I1109 21:30:58.175943  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1109 21:30:58.291762  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:58.319324  714076 logs.go:123] Gathering logs for etcd [de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f] ...
	I1109 21:30:58.319357  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f"
	I1109 21:30:58.383067  714076 logs.go:123] Gathering logs for kube-controller-manager [32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794] ...
	I1109 21:30:58.383099  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794"
	I1109 21:30:58.456843  714076 logs.go:123] Gathering logs for container status ...
	I1109 21:30:58.456875  714076 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
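The container-status probe above is a shell fallback chain: the backticks substitute crictl's full path (or the bare name when `which` finds nothing), and `|| sudo docker ps -a` only runs if the crictl invocation fails, so the same command works on both CRI-O and Docker runtimes. Written out long-hand (a sketch, not minikube's actual source):

    # Prefer crictl; fall back to the Docker CLI when crictl is missing or errors.
    CRICTL="$(which crictl || echo crictl)"
    sudo "$CRICTL" ps -a || sudo docker ps -a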
	I1109 21:30:58.518159  714076 out.go:309] Setting ErrFile to fd 2...
	I1109 21:30:58.518187  714076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1109 21:30:58.518240  714076 out.go:239] X Problems detected in kubelet:
	W1109 21:30:58.518253  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.181644    1365 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-386274' and this object
	W1109 21:30:58.518265  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.230667    1365 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:58.518276  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.230711    1365 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-386274" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:58.518285  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: W1109 21:30:00.231597    1365 reflector.go:535] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	W1109 21:30:58.518292  714076 out.go:239]   Nov 09 21:30:00 addons-386274 kubelet[1365]: E1109 21:30:00.231646    1365 reflector.go:147] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-386274" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-386274' and this object
	I1109 21:30:58.518301  714076 out.go:309] Setting ErrFile to fd 2...
	I1109 21:30:58.518335  714076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:30:58.528806  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:58.789927  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:59.028664  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:59.289924  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:30:59.528360  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:30:59.790541  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:00.037122  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:00.290804  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:00.528938  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:00.793000  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:01.029196  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:01.290422  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:01.528276  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:01.791431  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:02.029166  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:02.289760  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:02.528686  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:02.790549  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:03.029175  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:03.290245  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:03.528961  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:03.789422  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:04.028996  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:04.289751  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:04.528208  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:04.789960  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:05.029909  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:05.293439  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:05.529883  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:05.790965  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:06.029215  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:06.289881  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:06.528764  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:06.796323  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:07.029106  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:07.290756  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:07.528910  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:07.792015  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:08.028911  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:08.291411  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:08.528231  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:08.530213  714076 system_pods.go:59] 18 kube-system pods found
	I1109 21:31:08.530242  714076 system_pods.go:61] "coredns-5dd5756b68-n2wnr" [d19385aa-db20-492f-b024-8ff423ea690e] Running
	I1109 21:31:08.530249  714076 system_pods.go:61] "csi-hostpath-attacher-0" [20312281-6fe6-4dc6-bc66-afac09e5a013] Running
	I1109 21:31:08.530255  714076 system_pods.go:61] "csi-hostpath-resizer-0" [13a2c49b-bf5c-463f-8d41-9d2dc9034cb6] Running
	I1109 21:31:08.530260  714076 system_pods.go:61] "csi-hostpathplugin-k4xqr" [9767dfcb-c152-4cb7-8911-c6139d045e6e] Running
	I1109 21:31:08.530265  714076 system_pods.go:61] "etcd-addons-386274" [df77f18e-5eeb-468c-9f21-07128f9b73e5] Running
	I1109 21:31:08.530270  714076 system_pods.go:61] "kindnet-z2mxk" [9f916364-7329-45dc-8c3f-fb1caf280a00] Running
	I1109 21:31:08.530281  714076 system_pods.go:61] "kube-apiserver-addons-386274" [45a5abd0-ee35-4626-8781-b9bee207f5dc] Running
	I1109 21:31:08.530291  714076 system_pods.go:61] "kube-controller-manager-addons-386274" [9e638c89-fad5-4092-adf6-bd340f7b5edf] Running
	I1109 21:31:08.530299  714076 system_pods.go:61] "kube-ingress-dns-minikube" [8e5b7dcc-a0cf-4553-85ec-7196d6f265c1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 21:31:08.530304  714076 system_pods.go:61] "kube-proxy-qrdzs" [ec8c7eb3-2178-4d12-ac1f-420a42e9903b] Running
	I1109 21:31:08.530339  714076 system_pods.go:61] "kube-scheduler-addons-386274" [297f77fb-0be2-4464-9dbf-77d11bb823ab] Running
	I1109 21:31:08.530345  714076 system_pods.go:61] "metrics-server-7c66d45ddc-654wx" [439ff363-c043-404a-af5d-eef3139e8db8] Running
	I1109 21:31:08.530350  714076 system_pods.go:61] "nvidia-device-plugin-daemonset-9nwzl" [5ce43e7e-9d07-4445-80dc-feaf3384dccb] Running
	I1109 21:31:08.530355  714076 system_pods.go:61] "registry-proxy-vg47b" [d3dba9fa-267c-4a65-9efd-566fb91fc9e2] Running
	I1109 21:31:08.530375  714076 system_pods.go:61] "registry-qm6sx" [a3bbea32-b042-4884-87b5-f93606dc9a25] Running
	I1109 21:31:08.530385  714076 system_pods.go:61] "snapshot-controller-58dbcc7b99-72nhx" [c413aaf9-5926-4717-b1fc-afde218c3e07] Running
	I1109 21:31:08.530391  714076 system_pods.go:61] "snapshot-controller-58dbcc7b99-wbtws" [3241a651-be26-45cf-8060-69f7123bee98] Running
	I1109 21:31:08.530399  714076 system_pods.go:61] "storage-provisioner" [c7e3e573-ea52-43b9-a615-ef8c2fad093b] Running
	I1109 21:31:08.530405  714076 system_pods.go:74] duration metric: took 11.237844658s to wait for pod list to return data ...
	I1109 21:31:08.530418  714076 default_sa.go:34] waiting for default service account to be created ...
	I1109 21:31:08.532970  714076 default_sa.go:45] found service account: "default"
	I1109 21:31:08.532997  714076 default_sa.go:55] duration metric: took 2.571583ms for default service account to be created ...
	I1109 21:31:08.533007  714076 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 21:31:08.542781  714076 system_pods.go:86] 18 kube-system pods found
	I1109 21:31:08.542816  714076 system_pods.go:89] "coredns-5dd5756b68-n2wnr" [d19385aa-db20-492f-b024-8ff423ea690e] Running
	I1109 21:31:08.542824  714076 system_pods.go:89] "csi-hostpath-attacher-0" [20312281-6fe6-4dc6-bc66-afac09e5a013] Running
	I1109 21:31:08.542830  714076 system_pods.go:89] "csi-hostpath-resizer-0" [13a2c49b-bf5c-463f-8d41-9d2dc9034cb6] Running
	I1109 21:31:08.543000  714076 system_pods.go:89] "csi-hostpathplugin-k4xqr" [9767dfcb-c152-4cb7-8911-c6139d045e6e] Running
	I1109 21:31:08.543013  714076 system_pods.go:89] "etcd-addons-386274" [df77f18e-5eeb-468c-9f21-07128f9b73e5] Running
	I1109 21:31:08.543019  714076 system_pods.go:89] "kindnet-z2mxk" [9f916364-7329-45dc-8c3f-fb1caf280a00] Running
	I1109 21:31:08.543030  714076 system_pods.go:89] "kube-apiserver-addons-386274" [45a5abd0-ee35-4626-8781-b9bee207f5dc] Running
	I1109 21:31:08.543039  714076 system_pods.go:89] "kube-controller-manager-addons-386274" [9e638c89-fad5-4092-adf6-bd340f7b5edf] Running
	I1109 21:31:08.543054  714076 system_pods.go:89] "kube-ingress-dns-minikube" [8e5b7dcc-a0cf-4553-85ec-7196d6f265c1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 21:31:08.543076  714076 system_pods.go:89] "kube-proxy-qrdzs" [ec8c7eb3-2178-4d12-ac1f-420a42e9903b] Running
	I1109 21:31:08.543085  714076 system_pods.go:89] "kube-scheduler-addons-386274" [297f77fb-0be2-4464-9dbf-77d11bb823ab] Running
	I1109 21:31:08.543093  714076 system_pods.go:89] "metrics-server-7c66d45ddc-654wx" [439ff363-c043-404a-af5d-eef3139e8db8] Running
	I1109 21:31:08.543099  714076 system_pods.go:89] "nvidia-device-plugin-daemonset-9nwzl" [5ce43e7e-9d07-4445-80dc-feaf3384dccb] Running
	I1109 21:31:08.543109  714076 system_pods.go:89] "registry-proxy-vg47b" [d3dba9fa-267c-4a65-9efd-566fb91fc9e2] Running
	I1109 21:31:08.543114  714076 system_pods.go:89] "registry-qm6sx" [a3bbea32-b042-4884-87b5-f93606dc9a25] Running
	I1109 21:31:08.543120  714076 system_pods.go:89] "snapshot-controller-58dbcc7b99-72nhx" [c413aaf9-5926-4717-b1fc-afde218c3e07] Running
	I1109 21:31:08.543128  714076 system_pods.go:89] "snapshot-controller-58dbcc7b99-wbtws" [3241a651-be26-45cf-8060-69f7123bee98] Running
	I1109 21:31:08.543133  714076 system_pods.go:89] "storage-provisioner" [c7e3e573-ea52-43b9-a615-ef8c2fad093b] Running
	I1109 21:31:08.543142  714076 system_pods.go:126] duration metric: took 10.129026ms to wait for k8s-apps to be running ...
	I1109 21:31:08.543154  714076 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 21:31:08.543212  714076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 21:31:08.559228  714076 system_svc.go:56] duration metric: took 16.066632ms WaitForService to wait for kubelet.
	I1109 21:31:08.559295  714076 kubeadm.go:581] duration metric: took 1m42.710691855s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 21:31:08.559331  714076 node_conditions.go:102] verifying NodePressure condition ...
	I1109 21:31:08.562716  714076 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 21:31:08.562749  714076 node_conditions.go:123] node cpu capacity is 2
	I1109 21:31:08.562764  714076 node_conditions.go:105] duration metric: took 3.420873ms to run NodePressure ...
	I1109 21:31:08.562776  714076 start.go:228] waiting for startup goroutines ...
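Each of the checks above (kube-system pods Running, default service account present, NodePressure conditions clear) can be spot-checked by hand; some hedged kubectl equivalents against the same cluster:

    # Pod phases, as system_pods.go verifies internally.
    kubectl --context addons-386274 get pods -n kube-system
    # The service account that default_sa.go waited on.
    kubectl --context addons-386274 get serviceaccount default -n default
    # The node conditions behind the NodePressure verification.
    kubectl --context addons-386274 describe node addons-386274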
	I1109 21:31:08.790277  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:09.028910  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:09.292401  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:09.529720  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:09.791051  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:10.029478  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:10.290352  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:10.529642  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:10.793593  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:11.028933  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:11.293431  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:11.529792  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:11.793070  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:12.035862  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:12.291618  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:12.528527  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:12.790566  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:13.029084  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:13.290542  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:13.529389  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:13.791932  714076 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:31:14.030341  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:14.291446  714076 kapi.go:107] duration metric: took 1m42.535066031s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 21:31:14.528079  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:15.030710  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:15.528138  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:16.028014  714076 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 21:31:16.528431  714076 kapi.go:107] duration metric: took 1m40.031074337s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1109 21:31:16.530851  714076 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-386274 cluster.
	I1109 21:31:16.532984  714076 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1109 21:31:16.535157  714076 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
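The opt-out mentioned above is an ordinary pod label. A minimal sketch; the pod name is hypothetical and the label value "true" is an assumption, since the message only names the key:

    # Hypothetical pod the gcp-auth webhook should leave unmodified.
    kubectl --context addons-386274 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                  # illustrative name
      labels:
        gcp-auth-skip-secret: "true"      # key from the message above; value assumed
    spec:
      containers:
      - name: app
        image: docker.io/nginx:alpine
    EOF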
	I1109 21:31:16.537596  714076 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, ingress-dns, metrics-server, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1109 21:31:16.539543  714076 addons.go:502] enable addons completed in 1m51.063327614s: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher ingress-dns metrics-server default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1109 21:31:16.539587  714076 start.go:233] waiting for cluster config update ...
	I1109 21:31:16.539607  714076 start.go:242] writing updated cluster config ...
	I1109 21:31:16.539895  714076 ssh_runner.go:195] Run: rm -f paused
	I1109 21:31:16.615521  714076 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1109 21:31:16.617866  714076 out.go:177] * Done! kubectl is now configured to use "addons-386274" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 09 21:37:26 addons-386274 crio[898]: time="2023-11-09 21:37:26.088971666Z" level=info msg="Image docker.io/nginx:alpine not found" id=ae05a940-9cd1-41e2-a635-92ccb40a5b4a name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:37:40 addons-386274 crio[898]: time="2023-11-09 21:37:40.088401645Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d8b6c4d5-8998-4a41-bdef-5871c5a1ee22 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:37:40 addons-386274 crio[898]: time="2023-11-09 21:37:40.088618324Z" level=info msg="Image docker.io/nginx:alpine not found" id=d8b6c4d5-8998-4a41-bdef-5871c5a1ee22 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:37:55 addons-386274 crio[898]: time="2023-11-09 21:37:55.087889974Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=64248bf3-dea6-4524-a42c-01eedde99cd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:37:55 addons-386274 crio[898]: time="2023-11-09 21:37:55.088120544Z" level=info msg="Image docker.io/nginx:alpine not found" id=64248bf3-dea6-4524-a42c-01eedde99cd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:37:55 addons-386274 crio[898]: time="2023-11-09 21:37:55.089205977Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=c269a29f-94ce-4d4a-a21b-ec8569aa2014 name=/runtime.v1.ImageService/PullImage
	Nov 09 21:37:55 addons-386274 crio[898]: time="2023-11-09 21:37:55.091219181Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Nov 09 21:39:06 addons-386274 crio[898]: time="2023-11-09 21:39:06.088576173Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7845f4c5-6940-4e99-80c6-559f147f519e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:06 addons-386274 crio[898]: time="2023-11-09 21:39:06.088808721Z" level=info msg="Image docker.io/nginx:alpine not found" id=7845f4c5-6940-4e99-80c6-559f147f519e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:12 addons-386274 crio[898]: time="2023-11-09 21:39:12.117305969Z" level=info msg="Checking image status: registry.k8s.io/pause:3.9" id=4761b2f7-f754-4f8d-98b0-6e669dd9bb8e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:12 addons-386274 crio[898]: time="2023-11-09 21:39:12.117536400Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6 registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097],Size_:520014,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},Info:map[string]string{},}" id=4761b2f7-f754-4f8d-98b0-6e669dd9bb8e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:21 addons-386274 crio[898]: time="2023-11-09 21:39:21.088117938Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d7fba002-c67d-403f-bc96-8d6e2ef719dd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:21 addons-386274 crio[898]: time="2023-11-09 21:39:21.088339024Z" level=info msg="Image docker.io/nginx:alpine not found" id=d7fba002-c67d-403f-bc96-8d6e2ef719dd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:32 addons-386274 crio[898]: time="2023-11-09 21:39:32.088389844Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1cde5c3c-a4c6-4d04-a02a-bbbacf23ba9d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:32 addons-386274 crio[898]: time="2023-11-09 21:39:32.088620997Z" level=info msg="Image docker.io/nginx:alpine not found" id=1cde5c3c-a4c6-4d04-a02a-bbbacf23ba9d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:43 addons-386274 crio[898]: time="2023-11-09 21:39:43.089143990Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=405ac3ed-85d7-4f6c-bcb4-a398f96cdd53 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:43 addons-386274 crio[898]: time="2023-11-09 21:39:43.089363180Z" level=info msg="Image docker.io/nginx:alpine not found" id=405ac3ed-85d7-4f6c-bcb4-a398f96cdd53 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:58 addons-386274 crio[898]: time="2023-11-09 21:39:58.087611223Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=9b113787-5af7-499f-ad15-d9dd723a5b13 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:39:58 addons-386274 crio[898]: time="2023-11-09 21:39:58.087838102Z" level=info msg="Image docker.io/nginx:alpine not found" id=9b113787-5af7-499f-ad15-d9dd723a5b13 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:40:09 addons-386274 crio[898]: time="2023-11-09 21:40:09.088237096Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=4324f036-5e88-400f-9767-876e93497b69 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:40:09 addons-386274 crio[898]: time="2023-11-09 21:40:09.088458379Z" level=info msg="Image docker.io/nginx:alpine not found" id=4324f036-5e88-400f-9767-876e93497b69 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:40:20 addons-386274 crio[898]: time="2023-11-09 21:40:20.087804442Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=4e421e7e-1274-44bb-86fb-a496902746a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:40:20 addons-386274 crio[898]: time="2023-11-09 21:40:20.088028957Z" level=info msg="Image docker.io/nginx:alpine not found" id=4e421e7e-1274-44bb-86fb-a496902746a2 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:40:35 addons-386274 crio[898]: time="2023-11-09 21:40:35.087733003Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e8f2bfb7-cb62-4098-85bb-d59720fd7c13 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:40:35 addons-386274 crio[898]: time="2023-11-09 21:40:35.087978515Z" level=info msg="Image docker.io/nginx:alpine not found" id=e8f2bfb7-cb62-4098-85bb-d59720fd7c13 name=/runtime.v1.ImageService/ImageStatus
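This repeating "not found" / "Trying to access" pattern is the runtime-side view of the nginx pod's ImagePullBackOff: CRI-O keeps answering the kubelet's ImageStatus probes with "not found" because the pull of docker.io/library/nginx:alpine never completes. Hedged commands to probe the same state by hand from the node (whether the manual pull succeeds depends on Docker Hub reachability from the CI host):

    # Ask CRI-O directly whether the image has landed yet.
    minikube -p addons-386274 ssh -- sudo crictl images | grep nginx
    # Retry the pull manually to surface the underlying registry error.
    minikube -p addons-386274 ssh -- sudo crictl pull docker.io/library/nginx:alpine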
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4407f1f41a9d       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                             4 minutes ago       Exited              minikube-ingress-dns      6                   87a5d5c9cde52       kube-ingress-dns-minikube
	a71cd83c5fa7f       ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd29304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9                        8 minutes ago       Running             headlamp                  0                   ebf64c7cc87ef       headlamp-777fd4b855-qwvjk
	9e5079df7e5e5       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 9 minutes ago       Running             gcp-auth                  0                   eee73ad0cf4b8       gcp-auth-d4c87556c-ksqmr
	70a480d1418d6       registry.k8s.io/ingress-nginx/controller@sha256:3cdc716f0395886008c5e49972297adf1af87eeef472f71ff8de11bf53f25766             9 minutes ago       Running             controller                0                   9a50f8d537896       ingress-nginx-controller-7c6974c4d8-pjm6b
	559f937530438       af594c6a879f2e441ea446a122296abbbe11aae5547e780f2582fbcda5df271c                                                             9 minutes ago       Exited              patch                     1                   39c8e35cf81f8       ingress-nginx-admission-patch-wq24w
	328a7cc007198       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   9 minutes ago       Exited              create                    0                   42c9952acd28f       ingress-nginx-admission-create-qhzc2
	17cdf5102ce6d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             10 minutes ago      Running             coredns                   0                   8dce3bf761a8f       coredns-5dd5756b68-n2wnr
	e6582ca6e5e88       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             10 minutes ago      Running             storage-provisioner       0                   504648a7c02d3       storage-provisioner
	d378939cda8b3       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             11 minutes ago      Running             kindnet-cni               0                   050029dff52af       kindnet-z2mxk
	aab2087a21f0b       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                             11 minutes ago      Running             kube-proxy                0                   0c7d4e371f321       kube-proxy-qrdzs
	d72fec82447cd       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                             11 minutes ago      Running             kube-scheduler            0                   cf133ac2fae0b       kube-scheduler-addons-386274
	5970fcc46ca24       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                                             11 minutes ago      Running             kube-apiserver            0                   586055932d135       kube-apiserver-addons-386274
	32f25b0e47634       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                             11 minutes ago      Running             kube-controller-manager   0                   f105f021b854d       kube-controller-manager-addons-386274
	de421106d18cf       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             11 minutes ago      Running             etcd                      0                   7ffa3d3db6549       etcd-addons-386274
	
	* 
	* ==> coredns [17cdf5102ce6dcf5f3cd94d6c7c534a655e264acf483d44428e472c6944d19c4] <==
	* [INFO] 10.244.0.13:55164 - 14248 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002161814s
	[INFO] 10.244.0.13:54959 - 48833 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145919s
	[INFO] 10.244.0.13:54959 - 54983 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000274878s
	[INFO] 10.244.0.13:59170 - 11409 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000094817s
	[INFO] 10.244.0.13:59170 - 878 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000032066s
	[INFO] 10.244.0.13:38851 - 30150 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041608s
	[INFO] 10.244.0.13:38851 - 14784 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031401s
	[INFO] 10.244.0.13:44616 - 10836 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003899s
	[INFO] 10.244.0.13:44616 - 1110 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031942s
	[INFO] 10.244.0.13:37402 - 37461 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003194422s
	[INFO] 10.244.0.13:37402 - 35675 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00342962s
	[INFO] 10.244.0.13:43272 - 50257 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000065123s
	[INFO] 10.244.0.13:43272 - 31575 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000043339s
	[INFO] 10.244.0.19:52860 - 12162 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000250312s
	[INFO] 10.244.0.19:52762 - 29461 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000489153s
	[INFO] 10.244.0.19:45218 - 59283 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00026106s
	[INFO] 10.244.0.19:54410 - 10697 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000268528s
	[INFO] 10.244.0.19:35211 - 53019 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000188594s
	[INFO] 10.244.0.19:51013 - 37226 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104393s
	[INFO] 10.244.0.19:49339 - 25557 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004849943s
	[INFO] 10.244.0.19:39220 - 291 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004903029s
	[INFO] 10.244.0.19:57195 - 12842 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001155167s
	[INFO] 10.244.0.19:57410 - 42923 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.0007394s
	[INFO] 10.244.0.21:38301 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000382767s
	[INFO] 10.244.0.21:58531 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000171461s
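The NXDOMAIN bursts above are the pod resolver walking its search path: with the usual cluster default of ndots:5, a name such as registry.kube-system.svc.cluster.local has fewer than five dots, so each configured search suffix (the pod's namespace domain, svc.cluster.local, cluster.local, the host's us-east-2.compute.internal domain) is tried and rejected before the plain name resolves. A hedged way to inspect this from inside the cluster (the target pod is illustrative; any pod whose image ships cat works):

    # Show the search suffixes and ndots option injected into a pod.
    kubectl --context addons-386274 exec -n kube-system kube-proxy-qrdzs -- cat /etc/resolv.conf
    # A trailing dot marks a name fully qualified and skips the search-path walk:
    #   registry.kube-system.svc.cluster.local.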
	
	* 
	* ==> describe nodes <==
	* Name:               addons-386274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-386274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b
	                    minikube.k8s.io/name=addons-386274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_09T21_29_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-386274
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Nov 2023 21:29:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-386274
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Nov 2023 21:40:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Nov 2023 21:37:21 +0000   Thu, 09 Nov 2023 21:29:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Nov 2023 21:37:21 +0000   Thu, 09 Nov 2023 21:29:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Nov 2023 21:37:21 +0000   Thu, 09 Nov 2023 21:29:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Nov 2023 21:37:21 +0000   Thu, 09 Nov 2023 21:30:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-386274
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 35a28fa35cb440e08dd76e4d24fe58f4
	  System UUID:                de92e1c6-da56-44dd-8b1b-62c97b4adb2d
	  Boot ID:                    c6805f31-bd75-4a7d-9a37-90ff74c38794
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  gcp-auth                    gcp-auth-d4c87556c-ksqmr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  headlamp                    headlamp-777fd4b855-qwvjk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m58s
	  ingress-nginx               ingress-nginx-controller-7c6974c4d8-pjm6b    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-n2wnr                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-addons-386274                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kindnet-z2mxk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-386274                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-386274        200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-qrdzs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-386274                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-386274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-386274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node addons-386274 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node addons-386274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node addons-386274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node addons-386274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node addons-386274 event: Registered Node addons-386274 in Controller
	  Normal  NodeReady                10m                kubelet          Node addons-386274 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001051] FS-Cache: O-key=[8] '495f3b0000000000'
	[  +0.000751] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000992] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=00000000e55cb6e2
	[  +0.001113] FS-Cache: N-key=[8] '495f3b0000000000'
	[  +0.006180] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000966] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000000f262c50
	[  +0.001104] FS-Cache: O-key=[8] '495f3b0000000000'
	[  +0.000740] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000002567691d
	[  +0.001074] FS-Cache: N-key=[8] '495f3b0000000000'
	[  +2.382360] FS-Cache: Duplicate cookie detected
	[  +0.000732] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000968] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000003bb74389
	[  +0.001058] FS-Cache: O-key=[8] '485f3b0000000000'
	[  +0.000711] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=00000000e55cb6e2
	[  +0.001078] FS-Cache: N-key=[8] '485f3b0000000000'
	[  +0.437507] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000972] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000006d162e6b
	[  +0.001059] FS-Cache: O-key=[8] '4e5f3b0000000000'
	[  +0.000716] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000004321a16c
	[  +0.001093] FS-Cache: N-key=[8] '4e5f3b0000000000'
	
	* 
	* ==> etcd [de421106d18cffb78fb7112bc10ad9060265a8938e6538f60a005cfb5957722f] <==
	* {"level":"warn","ts":"2023-11-09T21:29:29.741014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.710965ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-09T21:29:29.767762Z","caller":"traceutil/trace.go:171","msg":"trace[12647754] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:399; }","duration":"246.474443ms","start":"2023-11-09T21:29:29.521276Z","end":"2023-11-09T21:29:29.767751Z","steps":["trace[12647754] 'agreement among raft nodes before linearized reading'  (duration: 219.694588ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-09T21:29:29.741098Z","caller":"traceutil/trace.go:171","msg":"trace[2127445337] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"100.576085ms","start":"2023-11-09T21:29:29.640516Z","end":"2023-11-09T21:29:29.741092Z","steps":["trace[2127445337] 'process raft request'  (duration: 59.195435ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-09T21:29:29.795768Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"250.010072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-09T21:29:29.828434Z","caller":"traceutil/trace.go:171","msg":"trace[755375762] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:399; }","duration":"282.684873ms","start":"2023-11-09T21:29:29.545736Z","end":"2023-11-09T21:29:29.828421Z","steps":["trace[755375762] 'agreement among raft nodes before linearized reading'  (duration: 222.165305ms)","trace[755375762] 'range keys from in-memory index tree'  (duration: 27.829874ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-09T21:29:29.798095Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-09T21:29:29.424078Z","time spent":"343.579348ms","remote":"127.0.0.1:40160","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":977,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/storageclasses/standard\" mod_revision:0 > success:<request_put:<key:\"/registry/storageclasses/standard\" value_size:936 >> failure:<>"}
	{"level":"warn","ts":"2023-11-09T21:29:29.79846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"155.741091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3755"}
	{"level":"info","ts":"2023-11-09T21:29:29.828771Z","caller":"traceutil/trace.go:171","msg":"trace[2016194844] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:399; }","duration":"186.057175ms","start":"2023-11-09T21:29:29.642704Z","end":"2023-11-09T21:29:29.828761Z","steps":["trace[2016194844] 'agreement among raft nodes before linearized reading'  (duration: 125.345485ms)","trace[2016194844] 'range keys from in-memory index tree'  (duration: 30.359981ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-09T21:29:29.798625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.367732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-09T21:29:29.828946Z","caller":"traceutil/trace.go:171","msg":"trace[724211309] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:399; }","duration":"159.688164ms","start":"2023-11-09T21:29:29.66925Z","end":"2023-11-09T21:29:29.828938Z","steps":["trace[724211309] 'agreement among raft nodes before linearized reading'  (duration: 98.778986ms)","trace[724211309] 'range keys from in-memory index tree'  (duration: 30.578153ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-09T21:29:29.79866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.400755ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-09T21:29:29.829102Z","caller":"traceutil/trace.go:171","msg":"trace[987559142] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:399; }","duration":"167.837838ms","start":"2023-11-09T21:29:29.661254Z","end":"2023-11-09T21:29:29.829092Z","steps":["trace[987559142] 'agreement among raft nodes before linearized reading'  (duration: 106.783554ms)","trace[987559142] 'range keys from in-memory index tree'  (duration: 30.611359ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-09T21:29:29.798683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.559507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-qrdzs\" ","response":"range_response_count:1 size:4421"}
	{"level":"info","ts":"2023-11-09T21:29:29.829235Z","caller":"traceutil/trace.go:171","msg":"trace[1711723057] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-qrdzs; range_end:; response_count:1; response_revision:399; }","duration":"168.10909ms","start":"2023-11-09T21:29:29.66112Z","end":"2023-11-09T21:29:29.829229Z","steps":["trace[1711723057] 'agreement among raft nodes before linearized reading'  (duration: 106.923598ms)","trace[1711723057] 'range keys from in-memory index tree'  (duration: 30.622568ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-09T21:29:29.798786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.380219ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-09T21:29:29.829353Z","caller":"traceutil/trace.go:171","msg":"trace[888813703] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:399; }","duration":"188.975406ms","start":"2023-11-09T21:29:29.640372Z","end":"2023-11-09T21:29:29.829347Z","steps":["trace[888813703] 'agreement among raft nodes before linearized reading'  (duration: 127.692544ms)","trace[888813703] 'range keys from in-memory index tree'  (duration: 30.710361ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-09T21:29:29.798844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.389105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-386274\" ","response":"range_response_count:1 size:5743"}
	{"level":"info","ts":"2023-11-09T21:29:29.829489Z","caller":"traceutil/trace.go:171","msg":"trace[167292488] range","detail":"{range_begin:/registry/minions/addons-386274; range_end:; response_count:1; response_revision:399; }","duration":"189.033637ms","start":"2023-11-09T21:29:29.640448Z","end":"2023-11-09T21:29:29.829482Z","steps":["trace[167292488] 'agreement among raft nodes before linearized reading'  (duration: 127.606268ms)","trace[167292488] 'range keys from in-memory index tree'  (duration: 30.755063ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-09T21:29:29.79886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.445629ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-09T21:29:29.829647Z","caller":"traceutil/trace.go:171","msg":"trace[238296565] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:399; }","duration":"189.226891ms","start":"2023-11-09T21:29:29.640411Z","end":"2023-11-09T21:29:29.829638Z","steps":["trace[238296565] 'agreement among raft nodes before linearized reading'  (duration: 127.648409ms)","trace[238296565] 'range keys from in-memory index tree'  (duration: 30.792838ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-09T21:29:30.429169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.913793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-09T21:29:30.429238Z","caller":"traceutil/trace.go:171","msg":"trace[412450932] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:472; }","duration":"106.007536ms","start":"2023-11-09T21:29:30.323208Z","end":"2023-11-09T21:29:30.429216Z","steps":["trace[412450932] 'agreement among raft nodes before linearized reading'  (duration: 76.257217ms)","trace[412450932] 'range keys from in-memory index tree'  (duration: 29.642808ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-09T21:39:06.463677Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1869}
	{"level":"info","ts":"2023-11-09T21:39:06.491592Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1869,"took":"27.33344ms","hash":3080343927}
	{"level":"info","ts":"2023-11-09T21:39:06.491648Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3080343927,"revision":1869,"compact-revision":-1}
	
	* 
	* ==> gcp-auth [9e5079df7e5e5b83b7faecb0cafabb54c4e41b53b38cf0ec9d0aa245a7513357] <==
	* 2023/11/09 21:31:15 GCP Auth Webhook started!
	2023/11/09 21:31:23 Ready to marshal response ...
	2023/11/09 21:31:23 Ready to write response ...
	2023/11/09 21:31:23 Ready to marshal response ...
	2023/11/09 21:31:23 Ready to write response ...
	2023/11/09 21:31:26 Ready to marshal response ...
	2023/11/09 21:31:26 Ready to write response ...
	2023/11/09 21:31:32 Ready to marshal response ...
	2023/11/09 21:31:32 Ready to write response ...
	2023/11/09 21:31:38 Ready to marshal response ...
	2023/11/09 21:31:38 Ready to write response ...
	2023/11/09 21:31:38 Ready to marshal response ...
	2023/11/09 21:31:38 Ready to write response ...
	2023/11/09 21:31:38 Ready to marshal response ...
	2023/11/09 21:31:38 Ready to write response ...
	2023/11/09 21:31:57 Ready to marshal response ...
	2023/11/09 21:31:57 Ready to write response ...
	2023/11/09 21:32:28 Ready to marshal response ...
	2023/11/09 21:32:28 Ready to write response ...
	2023/11/09 21:32:33 Ready to marshal response ...
	2023/11/09 21:32:33 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:40:36 up  4:23,  0 users,  load average: 0.23, 0.62, 1.40
	Linux addons-386274 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [d378939cda8b30399142249a31b5af182bab38810228cdfcf7f9def560d7ebc0] <==
	* I1109 21:38:30.317858       1 main.go:227] handling current node
	I1109 21:38:40.321781       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:38:40.321812       1 main.go:227] handling current node
	I1109 21:38:50.334097       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:38:50.334125       1 main.go:227] handling current node
	I1109 21:39:00.345040       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:39:00.345144       1 main.go:227] handling current node
	I1109 21:39:10.349199       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:39:10.349229       1 main.go:227] handling current node
	I1109 21:39:20.361157       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:39:20.361183       1 main.go:227] handling current node
	I1109 21:39:30.382207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:39:30.382235       1 main.go:227] handling current node
	I1109 21:39:40.394199       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:39:40.394229       1 main.go:227] handling current node
	I1109 21:39:50.406829       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:39:50.406930       1 main.go:227] handling current node
	I1109 21:40:00.416419       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:40:00.416447       1 main.go:227] handling current node
	I1109 21:40:10.420394       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:40:10.420422       1 main.go:227] handling current node
	I1109 21:40:20.428539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:40:20.428565       1 main.go:227] handling current node
	I1109 21:40:30.432237       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:40:30.432264       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [5970fcc46ca2430209ad39c6f3ca3ec47e300b4f3181ef71afa4aba9f1147867] <==
	* I1109 21:31:38.031982       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.178.7"}
	E1109 21:31:48.436454       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1109 21:32:08.698654       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1109 21:32:10.067310       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1109 21:32:21.586210       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1109 21:32:21.612857       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1109 21:32:22.638082       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1109 21:32:33.502383       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1109 21:32:33.852138       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.50.200"}
	I1109 21:32:34.895450       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1109 21:32:45.234811       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 21:32:45.235084       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 21:32:45.312240       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 21:32:45.312401       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 21:32:45.360069       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 21:32:45.360208       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 21:32:45.369647       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 21:32:45.369852       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 21:32:45.383610       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 21:32:45.383724       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1109 21:32:45.393360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1109 21:32:45.393775       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1109 21:32:46.370094       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1109 21:32:46.394346       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1109 21:32:46.410462       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	* 
	* ==> kube-controller-manager [32f25b0e4763480abac599de677b57be6b2fbcb855587a9a058f2b9bf5d83794] <==
	* E1109 21:38:16.376790       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:38:35.512806       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:38:35.512838       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:38:47.579979       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:38:47.580013       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:38:52.398269       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:38:52.398305       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:38:59.575800       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:38:59.575836       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:39:26.899968       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:39:26.900001       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:39:31.023418       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:39:31.023457       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:39:43.149329       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:39:43.149364       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:39:43.828438       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:39:43.828472       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:40:04.111718       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:40:04.111748       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:40:09.630182       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:40:09.630217       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:40:18.467524       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:40:18.467560       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1109 21:40:34.709880       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1109 21:40:34.709913       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [aab2087a21f0bc027ca8e1bcd9f512eadac978b2e52a099434b7a9a367a8ae09] <==
	* I1109 21:29:29.278375       1 server_others.go:69] "Using iptables proxy"
	I1109 21:29:30.092481       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1109 21:29:31.119728       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 21:29:31.122335       1 server_others.go:152] "Using iptables Proxier"
	I1109 21:29:31.122382       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 21:29:31.122391       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 21:29:31.122443       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 21:29:31.122688       1 server.go:846] "Version info" version="v1.28.3"
	I1109 21:29:31.122704       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 21:29:31.123989       1 config.go:188] "Starting service config controller"
	I1109 21:29:31.124011       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 21:29:31.124042       1 config.go:97] "Starting endpoint slice config controller"
	I1109 21:29:31.124046       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 21:29:31.124415       1 config.go:315] "Starting node config controller"
	I1109 21:29:31.124432       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 21:29:31.229944       1 shared_informer.go:318] Caches are synced for service config
	I1109 21:29:31.230530       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1109 21:29:31.225140       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d72fec82447cdd03b90155185437bc523ff7af718742b9d8f3c4e88398cfc451] <==
	* W1109 21:29:08.901221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1109 21:29:08.901240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1109 21:29:08.901311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1109 21:29:08.901330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1109 21:29:08.901404       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 21:29:08.901420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1109 21:29:08.901477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1109 21:29:08.901492       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1109 21:29:08.902094       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1109 21:29:08.902124       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1109 21:29:08.900520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1109 21:29:08.906855       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1109 21:29:09.737218       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 21:29:09.737367       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1109 21:29:09.880904       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1109 21:29:09.880939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1109 21:29:09.946062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1109 21:29:09.946177       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1109 21:29:09.999438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 21:29:09.999805       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1109 21:29:09.999761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1109 21:29:09.999947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1109 21:29:10.033636       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1109 21:29:10.033757       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1109 21:29:11.971385       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 09 21:39:56 addons-386274 kubelet[1365]: E1109 21:39:56.088265    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(8e5b7dcc-a0cf-4553-85ec-7196d6f265c1)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="8e5b7dcc-a0cf-4553-85ec-7196d6f265c1"
	Nov 09 21:39:58 addons-386274 kubelet[1365]: E1109 21:39:58.088053    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="0e834f15-978e-44df-b1cd-629da375aa81"
	Nov 09 21:40:00 addons-386274 kubelet[1365]: E1109 21:40:00.627481    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9bd886f9553f9051a857878ab0a017f6ecbeddf5160450cc75334cfef51a48ae/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9bd886f9553f9051a857878ab0a017f6ecbeddf5160450cc75334cfef51a48ae/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:00 addons-386274 kubelet[1365]: E1109 21:40:00.689234    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1ece907d9efdf48e1ee191a3879a06f3499488bda1eea39ecc3525918a6754d6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1ece907d9efdf48e1ee191a3879a06f3499488bda1eea39ecc3525918a6754d6/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:00 addons-386274 kubelet[1365]: E1109 21:40:00.981488    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0018a034562d320e6212db76d1a9ae9c4806435ecfeb2223b869b55a828cb527/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0018a034562d320e6212db76d1a9ae9c4806435ecfeb2223b869b55a828cb527/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:09 addons-386274 kubelet[1365]: E1109 21:40:09.088700    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="0e834f15-978e-44df-b1cd-629da375aa81"
	Nov 09 21:40:10 addons-386274 kubelet[1365]: I1109 21:40:10.087942    1365 scope.go:117] "RemoveContainer" containerID="b4407f1f41a9d07cac676ebcc58c195cf4b815e62e8fe573b57b34a82a41fe24"
	Nov 09 21:40:10 addons-386274 kubelet[1365]: E1109 21:40:10.088382    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(8e5b7dcc-a0cf-4553-85ec-7196d6f265c1)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="8e5b7dcc-a0cf-4553-85ec-7196d6f265c1"
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.260865    1365 manager.go:1106] Failed to create existing container: /crio-ff1f1fcfcf7ed9984ada08e0bc35c8b61c68911614d526d5605a0a31a80567a3: Error finding container ff1f1fcfcf7ed9984ada08e0bc35c8b61c68911614d526d5605a0a31a80567a3: Status 404 returned error can't find the container with id ff1f1fcfcf7ed9984ada08e0bc35c8b61c68911614d526d5605a0a31a80567a3
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.265100    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1ece907d9efdf48e1ee191a3879a06f3499488bda1eea39ecc3525918a6754d6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1ece907d9efdf48e1ee191a3879a06f3499488bda1eea39ecc3525918a6754d6/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.266188    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/67273bec495fe8d91a40d9ea11c7a1715ddf4594b59930080537cacd17842e3f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/67273bec495fe8d91a40d9ea11c7a1715ddf4594b59930080537cacd17842e3f/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.267285    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/67273bec495fe8d91a40d9ea11c7a1715ddf4594b59930080537cacd17842e3f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/67273bec495fe8d91a40d9ea11c7a1715ddf4594b59930080537cacd17842e3f/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.269438    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fe5336687946b927766b6f982fc312a7cd38de2d6fda5d8830e5777e2fb44d9f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fe5336687946b927766b6f982fc312a7cd38de2d6fda5d8830e5777e2fb44d9f/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.278663    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/68d8814784a0d56e3195ce70c306599543c8d8eea4918d2efe307f3ec710ace2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/68d8814784a0d56e3195ce70c306599543c8d8eea4918d2efe307f3ec710ace2/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.281815    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fe5336687946b927766b6f982fc312a7cd38de2d6fda5d8830e5777e2fb44d9f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fe5336687946b927766b6f982fc312a7cd38de2d6fda5d8830e5777e2fb44d9f/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.315730    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7552474c1e0e3149cb17a7140cba3f86d0e400366571e14a1ba9a7b9ab761624/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7552474c1e0e3149cb17a7140cba3f86d0e400366571e14a1ba9a7b9ab761624/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.335043    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/980dad06d43f4fe76d456364e46a4e1d93937cb65a9c5599a07687a94372d789/diff" to get inode usage: stat /var/lib/containers/storage/overlay/980dad06d43f4fe76d456364e46a4e1d93937cb65a9c5599a07687a94372d789/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.338182    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/980dad06d43f4fe76d456364e46a4e1d93937cb65a9c5599a07687a94372d789/diff" to get inode usage: stat /var/lib/containers/storage/overlay/980dad06d43f4fe76d456364e46a4e1d93937cb65a9c5599a07687a94372d789/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.346735    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7552474c1e0e3149cb17a7140cba3f86d0e400366571e14a1ba9a7b9ab761624/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7552474c1e0e3149cb17a7140cba3f86d0e400366571e14a1ba9a7b9ab761624/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.353070    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9bd886f9553f9051a857878ab0a017f6ecbeddf5160450cc75334cfef51a48ae/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9bd886f9553f9051a857878ab0a017f6ecbeddf5160450cc75334cfef51a48ae/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:12 addons-386274 kubelet[1365]: E1109 21:40:12.355390    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0018a034562d320e6212db76d1a9ae9c4806435ecfeb2223b869b55a828cb527/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0018a034562d320e6212db76d1a9ae9c4806435ecfeb2223b869b55a828cb527/diff: no such file or directory, extraDiskErr: <nil>
	Nov 09 21:40:20 addons-386274 kubelet[1365]: E1109 21:40:20.088256    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="0e834f15-978e-44df-b1cd-629da375aa81"
	Nov 09 21:40:23 addons-386274 kubelet[1365]: I1109 21:40:23.087401    1365 scope.go:117] "RemoveContainer" containerID="b4407f1f41a9d07cac676ebcc58c195cf4b815e62e8fe573b57b34a82a41fe24"
	Nov 09 21:40:23 addons-386274 kubelet[1365]: E1109 21:40:23.087666    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(8e5b7dcc-a0cf-4553-85ec-7196d6f265c1)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="8e5b7dcc-a0cf-4553-85ec-7196d6f265c1"
	Nov 09 21:40:35 addons-386274 kubelet[1365]: E1109 21:40:35.088262    1365 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="0e834f15-978e-44df-b1cd-629da375aa81"
	
	* 
	* ==> storage-provisioner [e6582ca6e5e88063a0b14a113e62b3fe1dc7bf32761e7638c63d9611a7f6c3af] <==
	* I1109 21:30:01.077242       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 21:30:01.133968       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 21:30:01.134627       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 21:30:01.151937       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 21:30:01.154682       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-386274_638c0591-80b9-48e3-86d7-b025ec82170f!
	I1109 21:30:01.157564       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9401a970-af34-49d7-b60d-20eb4a977736", APIVersion:"v1", ResourceVersion:"881", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-386274_638c0591-80b9-48e3-86d7-b025ec82170f became leader
	I1109 21:30:01.255681       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-386274_638c0591-80b9-48e3-86d7-b025ec82170f!
	

-- /stdout --
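The kubelet section of the dump above pins the Ingress failure to repeated ImagePullBackOff on docker.io/nginx:alpine rather than to anything in the control plane. For reference, the same dump can be regenerated outside the harness with the bundled binary (the PersistentVolumeClaim post-mortem below does exactly this for its own profile):

	out/minikube-linux-arm64 -p addons-386274 logs -n 25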
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-386274 -n addons-386274
helpers_test.go:261: (dbg) Run:  kubectl --context addons-386274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx ingress-nginx-admission-create-qhzc2 ingress-nginx-admission-patch-wq24w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-386274 describe pod nginx ingress-nginx-admission-create-qhzc2 ingress-nginx-admission-patch-wq24w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-386274 describe pod nginx ingress-nginx-admission-create-qhzc2 ingress-nginx-admission-patch-wq24w: exit status 1 (95.515725ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-386274/192.168.49.2
	Start Time:       Thu, 09 Nov 2023 21:32:33 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lsfdp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-lsfdp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  8m4s                    default-scheduler  Successfully assigned default/nginx to addons-386274
	  Warning  Failed     5m55s (x2 over 7m33s)   kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m5s (x4 over 8m3s)     kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     4m5s (x4 over 7m33s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m5s (x2 over 6m48s)    kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m49s (x6 over 7m33s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m57s (x10 over 7m33s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qhzc2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wq24w" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-386274 describe pod nginx ingress-nginx-admission-create-qhzc2 ingress-nginx-admission-patch-wq24w: exit status 1
--- FAIL: TestAddons/parallel/Ingress (484.69s)
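Every pull attempt in the events above dies on Docker Hub's anonymous rate limit (toomanyrequests), so the test could not recover once the limit was hit. A minimal sketch of a workaround for images known ahead of time, assuming the host can pull the image once and the job can invoke the bundled minikube binary, is to pre-load the image into the cluster so the kubelet never contacts docker.io:

	docker pull docker.io/nginx:alpine
	out/minikube-linux-arm64 -p addons-386274 image load docker.io/nginx:alpine

Authenticated pulls or a registry mirror would cover the general case; pre-loading only helps for images the job already knows about.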

TestFunctional/parallel/PersistentVolumeClaim (189.07s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [34231cba-ba97-4740-bce3-cf1d1f86db1a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.028428271s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-133528 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-133528 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-133528 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-133528 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6209db21-1e31-45b6-819d-4c0322a1b61d] Pending
helpers_test.go:344: "sp-pod" [6209db21-1e31-45b6-819d-4c0322a1b61d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1109 21:46:16.646994  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:46:44.421879  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-133528 -n functional-133528
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-11-09 21:47:56.757715328 +0000 UTC m=+1201.499686180
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-133528 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-133528 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-133528/192.168.49.2
Start Time:       Thu, 09 Nov 2023 21:44:56 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqbxp (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-pqbxp:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  3m                  default-scheduler  Successfully assigned default/sp-pod to functional-133528
  Warning  Failed     49s (x2 over 2m4s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     49s (x2 over 2m4s)  kubelet            Error: ErrImagePull
  Normal   BackOff    35s (x2 over 2m3s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     35s (x2 over 2m3s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    20s (x3 over 3m)    kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-133528 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-133528 logs sp-pod -n default: exit status 1 (106.923978ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-133528 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
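This is the same Docker Hub rate-limit failure as the Ingress test, surfacing through docker.io/nginx instead; the storage path itself was never exercised. Since the claim is independent of the pod, one quick check that the provisioner did its part despite the pull failure (a sketch, reusing the claim name from the describe output above) is:

	kubectl --context functional-133528 get pvc myclaim -o jsonpath='{.status.phase}'

A phase of Bound would narrow the failure to image pulling alone.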
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-133528
helpers_test.go:235: (dbg) docker inspect functional-133528:

-- stdout --
	[
	    {
	        "Id": "200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64",
	        "Created": "2023-11-09T21:42:03.913432695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 729434,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T21:42:04.237383529Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/hostname",
	        "HostsPath": "/var/lib/docker/containers/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/hosts",
	        "LogPath": "/var/lib/docker/containers/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64-json.log",
	        "Name": "/functional-133528",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-133528:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-133528",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/98a976986ee6009da6a6d497cf2daa0eed8da870c0535e53942089ba34a0dd4f-init/diff:/var/lib/docker/overlay2/7d8c4fc646533218e970cbbc2fae53265551a122428b3ce7f5ec8807d1cc9c68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/98a976986ee6009da6a6d497cf2daa0eed8da870c0535e53942089ba34a0dd4f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/98a976986ee6009da6a6d497cf2daa0eed8da870c0535e53942089ba34a0dd4f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/98a976986ee6009da6a6d497cf2daa0eed8da870c0535e53942089ba34a0dd4f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-133528",
	                "Source": "/var/lib/docker/volumes/functional-133528/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-133528",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-133528",
	                "name.minikube.sigs.k8s.io": "functional-133528",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3b81c84402adf89ff3da9f0dc3e283b5de245095f862e869c27696c002d36429",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33685"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33684"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33681"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33683"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33682"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3b81c84402ad",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-133528": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "200da875897b",
	                        "functional-133528"
	                    ],
	                    "NetworkID": "9827b6e5e9ad8e3a0329ad5aaa3a69639c584fa3317cc74e3f6961619eff2bbc",
	                    "EndpointID": "dc6036ef0f0ce53964c5922a888e5c7fb87d869903980e5ef439b6973ee6a8da",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
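One detail worth noting in this inspect dump: the container was created with empty HostPort bindings (Docker picks free host ports at start), and the resolved ports only appear later under NetworkSettings.Ports. A Go-template query against the same data reads a mapped port back out, for example the API server on 8441/tcp:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-133528

Against the output above this prints 33682.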
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-133528 -n functional-133528
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 logs -n 25: (1.783243939s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-133528 ssh sudo                                               | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-133528                                                        | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-133528 ssh                                                    | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-133528 cache reload                                           | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	| ssh     | functional-133528 ssh                                                    | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-133528 kubectl --                                             | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | --context functional-133528                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-133528                                                     | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	| service | invalid-svc -p                                                           | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC |                     |
	|         | functional-133528                                                        |                   |         |         |                     |                     |
	| cp      | functional-133528 cp                                                     | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-133528 config unset                                           | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-133528 config get                                             | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-133528 config set                                             | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | cpus 2                                                                   |                   |         |         |                     |                     |
	| config  | functional-133528 config get                                             | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| config  | functional-133528 config unset                                           | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-133528 ssh -n                                                 | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | functional-133528 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config  | functional-133528 config get                                             | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC |                     |
	|         | cpus                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-133528 ssh echo                                               | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | hello                                                                    |                   |         |         |                     |                     |
	| cp      | functional-133528 cp                                                     | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | functional-133528:/home/docker/cp-test.txt                               |                   |         |         |                     |                     |
	|         | /tmp/TestFunctionalparallelCpCmd1643400076/001/cp-test.txt               |                   |         |         |                     |                     |
	| ssh     | functional-133528 ssh cat                                                | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | /etc/hostname                                                            |                   |         |         |                     |                     |
	| ssh     | functional-133528 ssh -n                                                 | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC | 09 Nov 23 21:44 UTC |
	|         | functional-133528 sudo cat                                               |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| tunnel  | functional-133528 tunnel                                                 | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-133528 tunnel                                                 | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel  | functional-133528 tunnel                                                 | functional-133528 | jenkins | v1.32.0 | 09 Nov 23 21:44 UTC |                     |
	|         | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
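
Each row in the Audit table above is one CLI invocation; a row with an empty End Time appears to be a run that did not exit cleanly (for example `config get cpus` before any value was set, and the expected-to-fail `service invalid-svc`). A hedged sketch of driving the same binary from a harness, assuming the arm64 build path this report uses:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-arm64" // path assumed from this report's harness
	for _, args := range [][]string{
		{"-p", "functional-133528", "config", "set", "cpus", "2"},
		{"-p", "functional-133528", "config", "get", "cpus"},
		{"-p", "functional-133528", "config", "unset", "cpus"},
	} {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("%v -> %s (err=%v)\n", args, out, err)
	}
}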
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/09 21:44:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 21:44:04.673845  734306 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:44:04.673993  734306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:44:04.673997  734306 out.go:309] Setting ErrFile to fd 2...
	I1109 21:44:04.674002  734306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:44:04.674252  734306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 21:44:04.674644  734306 out.go:303] Setting JSON to false
	I1109 21:44:04.675689  734306 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15994,"bootTime":1699550250,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 21:44:04.675751  734306 start.go:138] virtualization:  
	I1109 21:44:04.679724  734306 out.go:177] * [functional-133528] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 21:44:04.681517  734306 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 21:44:04.683504  734306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 21:44:04.681710  734306 notify.go:220] Checking for updates...
	I1109 21:44:04.687427  734306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:44:04.689290  734306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 21:44:04.690962  734306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 21:44:04.693021  734306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 21:44:04.695336  734306 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 21:44:04.695451  734306 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 21:44:04.720683  734306 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 21:44:04.720784  734306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:44:04.805206  734306 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-09 21:44:04.793509247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:44:04.805309  734306 docker.go:295] overlay module found
	I1109 21:44:04.807291  734306 out.go:177] * Using the docker driver based on existing profile
	I1109 21:44:04.809383  734306 start.go:298] selected driver: docker
	I1109 21:44:04.809400  734306 start.go:902] validating driver "docker" against &{Name:functional-133528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-133528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:44:04.809495  734306 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 21:44:04.809601  734306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:44:04.876650  734306 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-09 21:44:04.867478712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:44:04.877023  734306 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 21:44:04.877065  734306 cni.go:84] Creating CNI manager for ""
	I1109 21:44:04.877072  734306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:44:04.877081  734306 start_flags.go:323] config:
	{Name:functional-133528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-133528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:44:04.879409  734306 out.go:177] * Starting control plane node functional-133528 in cluster functional-133528
	I1109 21:44:04.881380  734306 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 21:44:04.883218  734306 out.go:177] * Pulling base image ...
	I1109 21:44:04.884990  734306 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 21:44:04.885036  734306 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1109 21:44:04.885043  734306 cache.go:56] Caching tarball of preloaded images
	I1109 21:44:04.885077  734306 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1109 21:44:04.885127  734306 preload.go:174] Found /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 21:44:04.885135  734306 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1109 21:44:04.885257  734306 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/config.json ...
	I1109 21:44:04.903243  734306 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1109 21:44:04.903259  734306 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1109 21:44:04.903282  734306 cache.go:194] Successfully downloaded all kic artifacts
	I1109 21:44:04.903330  734306 start.go:365] acquiring machines lock for functional-133528: {Name:mk129e1fd0fb10ee16aa09ba80a07c0254311d9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 21:44:04.903412  734306 start.go:369] acquired machines lock for "functional-133528" in 49.525µs
	I1109 21:44:04.903443  734306 start.go:96] Skipping create...Using existing machine configuration
	I1109 21:44:04.903465  734306 fix.go:54] fixHost starting: 
	I1109 21:44:04.903773  734306 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
	I1109 21:44:04.927613  734306 fix.go:102] recreateIfNeeded on functional-133528: state=Running err=<nil>
	W1109 21:44:04.927632  734306 fix.go:128] unexpected machine state, will restart: <nil>
	I1109 21:44:04.929866  734306 out.go:177] * Updating the running docker "functional-133528" container ...
	I1109 21:44:04.932131  734306 machine.go:88] provisioning docker machine ...
	I1109 21:44:04.932147  734306 ubuntu.go:169] provisioning hostname "functional-133528"
	I1109 21:44:04.932223  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:04.952573  734306 main.go:141] libmachine: Using SSH client type: native
	I1109 21:44:04.953004  734306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33685 <nil> <nil>}
	I1109 21:44:04.953015  734306 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-133528 && echo "functional-133528" | sudo tee /etc/hostname
	I1109 21:44:05.109867  734306 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-133528
	
	I1109 21:44:05.109949  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:05.128195  734306 main.go:141] libmachine: Using SSH client type: native
	I1109 21:44:05.128609  734306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33685 <nil> <nil>}
	I1109 21:44:05.128625  734306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-133528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-133528/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-133528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 21:44:05.271299  734306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
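
The two SSH commands above (set the hostname, then patch /etc/hosts) run over the container's forwarded SSH port. A sketch of the same round trip using the golang.org/x/crypto/ssh package; the key path and port 33685 are taken from this run's log and will differ between runs:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33685", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname functional-133528 && echo "functional-133528" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}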
	I1109 21:44:05.271319  734306 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 21:44:05.271335  734306 ubuntu.go:177] setting up certificates
	I1109 21:44:05.271345  734306 provision.go:83] configureAuth start
	I1109 21:44:05.271411  734306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-133528
	I1109 21:44:05.290352  734306 provision.go:138] copyHostCerts
	I1109 21:44:05.290403  734306 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 21:44:05.290410  734306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 21:44:05.290481  734306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 21:44:05.290578  734306 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 21:44:05.290582  734306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 21:44:05.290606  734306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 21:44:05.290662  734306 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 21:44:05.290666  734306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 21:44:05.290688  734306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 21:44:05.290741  734306 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.functional-133528 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube functional-133528]
	I1109 21:44:05.954905  734306 provision.go:172] copyRemoteCerts
	I1109 21:44:05.954960  734306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 21:44:05.955004  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:05.972627  734306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
	I1109 21:44:06.081488  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 21:44:06.114891  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 21:44:06.144732  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1109 21:44:06.174385  734306 provision.go:86] duration metric: configureAuth took 902.958424ms
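
configureAuth above generates a server certificate whose SANs cover the node IP, localhost, and both hostnames (the provision.go:112 line), then copies it into /etc/docker. A minimal Go sketch of producing a certificate with those SANs via crypto/x509; it is self-signed for brevity, whereas minikube signs with its ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-133528"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:112 san=[...] list above.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "functional-133528"},
	}
	// Self-signed here (parent == template); minikube uses its CA as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}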
	I1109 21:44:06.174402  734306 ubuntu.go:193] setting minikube options for container-runtime
	I1109 21:44:06.174596  734306 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 21:44:06.174700  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:06.192381  734306 main.go:141] libmachine: Using SSH client type: native
	I1109 21:44:06.192774  734306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33685 <nil> <nil>}
	I1109 21:44:06.192786  734306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 21:44:11.665612  734306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 21:44:11.665627  734306 machine.go:91] provisioned docker machine in 6.733488626s
	I1109 21:44:11.665636  734306 start.go:300] post-start starting for "functional-133528" (driver="docker")
	I1109 21:44:11.665647  734306 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 21:44:11.665723  734306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 21:44:11.665765  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:11.684887  734306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
	I1109 21:44:11.784894  734306 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 21:44:11.788931  734306 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 21:44:11.788958  734306 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 21:44:11.788968  734306 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 21:44:11.788973  734306 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1109 21:44:11.788985  734306 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 21:44:11.789036  734306 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 21:44:11.789112  734306 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 21:44:11.789187  734306 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/test/nested/copy/713573/hosts -> hosts in /etc/test/nested/copy/713573
	I1109 21:44:11.789229  734306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/713573
	I1109 21:44:11.800133  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 21:44:11.828571  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/test/nested/copy/713573/hosts --> /etc/test/nested/copy/713573/hosts (40 bytes)
	I1109 21:44:11.857321  734306 start.go:303] post-start completed in 191.670277ms
	I1109 21:44:11.857393  734306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 21:44:11.857429  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:11.874790  734306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
	I1109 21:44:11.972646  734306 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 21:44:11.978695  734306 fix.go:56] fixHost completed within 7.075239695s
	I1109 21:44:11.978710  734306 start.go:83] releasing machines lock for "functional-133528", held for 7.075289786s
	I1109 21:44:11.978779  734306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-133528
	I1109 21:44:12.000402  734306 ssh_runner.go:195] Run: cat /version.json
	I1109 21:44:12.000450  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:12.000661  734306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 21:44:12.000729  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:12.030875  734306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
	I1109 21:44:12.034047  734306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
	I1109 21:44:12.132655  734306 ssh_runner.go:195] Run: systemctl --version
	I1109 21:44:12.278540  734306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 21:44:12.430089  734306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 21:44:12.435662  734306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 21:44:12.446625  734306 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 21:44:12.446697  734306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 21:44:12.457484  734306 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
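
The two find/-exec runs above neutralize any preinstalled loopback or bridge CNI configs by renaming them with a .mk_disabled suffix so kindnet can take over. The same move in Go, assuming the standard /etc/cni/net.d directory from the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
	if err != nil {
		panic(err)
	}
	for _, m := range matches {
		if filepath.Ext(m) == ".mk_disabled" {
			continue // already disabled by an earlier start
		}
		if err := os.Rename(m, m+".mk_disabled"); err != nil {
			panic(err)
		}
		fmt.Println("disabled", m)
	}
}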
	I1109 21:44:12.457498  734306 start.go:472] detecting cgroup driver to use...
	I1109 21:44:12.457528  734306 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1109 21:44:12.457576  734306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 21:44:12.473350  734306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 21:44:12.488479  734306 docker.go:203] disabling cri-docker service (if available) ...
	I1109 21:44:12.488540  734306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 21:44:12.506957  734306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 21:44:12.521289  734306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 21:44:12.733189  734306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 21:44:12.899552  734306 docker.go:219] disabling docker service ...
	I1109 21:44:12.899609  734306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 21:44:12.915772  734306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 21:44:12.928923  734306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 21:44:13.065763  734306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 21:44:13.200224  734306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 21:44:13.213572  734306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 21:44:13.233710  734306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1109 21:44:13.233790  734306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:44:13.245879  734306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 21:44:13.245938  734306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:44:13.258281  734306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:44:13.270615  734306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:44:13.282547  734306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 21:44:13.293207  734306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 21:44:13.304068  734306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 21:44:13.314579  734306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 21:44:13.437926  734306 ssh_runner.go:195] Run: sudo systemctl restart crio
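
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed one-liners (pause image, cgroup manager, conmon cgroup) before reloading and restarting CRI-O. A sketch of the equivalent edit in Go with regexp, assuming the same drop-in path; run it as root or against a copy:

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// sed '/conmon_cgroup = .*/d'
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(conf, nil)
	// sed cgroup_manager rewrite plus the '/a conmon_cgroup = "pod"' append
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}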
	I1109 21:44:13.599030  734306 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 21:44:13.599103  734306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 21:44:13.603928  734306 start.go:540] Will wait 60s for crictl version
	I1109 21:44:13.603980  734306 ssh_runner.go:195] Run: which crictl
	I1109 21:44:13.608122  734306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 21:44:13.660558  734306 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1109 21:44:13.660632  734306 ssh_runner.go:195] Run: crio --version
	I1109 21:44:13.705280  734306 ssh_runner.go:195] Run: crio --version
	I1109 21:44:13.749913  734306 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1109 21:44:13.752491  734306 cli_runner.go:164] Run: docker network inspect functional-133528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 21:44:13.770028  734306 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 21:44:13.776936  734306 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1109 21:44:13.779094  734306 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 21:44:13.779158  734306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 21:44:13.830790  734306 crio.go:496] all images are preloaded for cri-o runtime.
	I1109 21:44:13.830803  734306 crio.go:415] Images already preloaded, skipping extraction
	I1109 21:44:13.830855  734306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 21:44:13.870870  734306 crio.go:496] all images are preloaded for cri-o runtime.
	I1109 21:44:13.870894  734306 cache_images.go:84] Images are preloaded, skipping loading
	I1109 21:44:13.870975  734306 ssh_runner.go:195] Run: crio config
	I1109 21:44:13.927847  734306 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1109 21:44:13.927874  734306 cni.go:84] Creating CNI manager for ""
	I1109 21:44:13.927883  734306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:44:13.927895  734306 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 21:44:13.927919  734306 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-133528 NodeName:functional-133528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 21:44:13.928053  734306 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-133528"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
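
The generated kubeadm config ends here; the same three YAML documents are shipped to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small sketch that round-trips a trimmed copy of the KubeletConfiguration document to catch indentation mistakes before shipping, using the third-party gopkg.in/yaml.v3 package (the struct fields here are an illustrative subset, not minikube's own types):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const kubeletDoc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	var cfg struct {
		Kind          string `yaml:"kind"`
		CgroupDriver  string `yaml:"cgroupDriver"`
		StaticPodPath string `yaml:"staticPodPath"`
	}
	if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}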
	
	I1109 21:44:13.928137  734306 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=functional-133528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:functional-133528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
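
The kubelet unit fragment above is rendered per profile (binary path, hostname override, node IP) and written as a systemd drop-in; the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf follows below. A hypothetical, heavily reduced rendering of such a drop-in with text/template:

package main

import (
	"os"
	"text/template"
)

// Reduced drop-in template for illustration only; the real one carries more flags.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}} --container-runtime-endpoint=unix:///var/run/crio/crio.sock

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.28.3",
		"Node":    "functional-133528",
		"IP":      "192.168.49.2",
	}); err != nil {
		panic(err)
	}
}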
	I1109 21:44:13.928200  734306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1109 21:44:13.938623  734306 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 21:44:13.938691  734306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 21:44:13.949073  734306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (427 bytes)
	I1109 21:44:13.969920  734306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 21:44:13.990555  734306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1948 bytes)
	I1109 21:44:14.012537  734306 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 21:44:14.017201  734306 certs.go:56] Setting up /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528 for IP: 192.168.49.2
	I1109 21:44:14.017224  734306 certs.go:190] acquiring lock for shared ca certs: {Name:mk44b1a46a3acda84ddb5040e7a20ebcace98935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:44:14.017378  734306 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key
	I1109 21:44:14.017419  734306 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key
	I1109 21:44:14.017497  734306 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.key
	I1109 21:44:14.017546  734306 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/apiserver.key.dd3b5fb2
	I1109 21:44:14.017592  734306 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/proxy-client.key
	I1109 21:44:14.017703  734306 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem (1338 bytes)
	W1109 21:44:14.017728  734306 certs.go:433] ignoring /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573_empty.pem, impossibly tiny 0 bytes
	I1109 21:44:14.017736  734306 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 21:44:14.017758  734306 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem (1078 bytes)
	I1109 21:44:14.017787  734306 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem (1123 bytes)
	I1109 21:44:14.017808  734306 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem (1679 bytes)
	I1109 21:44:14.017853  734306 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 21:44:14.018515  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 21:44:14.046969  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 21:44:14.074984  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 21:44:14.102879  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 21:44:14.130174  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 21:44:14.158341  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 21:44:14.185814  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 21:44:14.213690  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 21:44:14.240764  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /usr/share/ca-certificates/7135732.pem (1708 bytes)
	I1109 21:44:14.268396  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 21:44:14.295443  734306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem --> /usr/share/ca-certificates/713573.pem (1338 bytes)
	I1109 21:44:14.323000  734306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 21:44:14.343865  734306 ssh_runner.go:195] Run: openssl version
	I1109 21:44:14.350927  734306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135732.pem && ln -fs /usr/share/ca-certificates/7135732.pem /etc/ssl/certs/7135732.pem"
	I1109 21:44:14.362419  734306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135732.pem
	I1109 21:44:14.366870  734306 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  9 21:41 /usr/share/ca-certificates/7135732.pem
	I1109 21:44:14.366926  734306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135732.pem
	I1109 21:44:14.375476  734306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7135732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 21:44:14.385751  734306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 21:44:14.396813  734306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:44:14.401280  734306 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  9 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:44:14.401337  734306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:44:14.409693  734306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 21:44:14.420244  734306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/713573.pem && ln -fs /usr/share/ca-certificates/713573.pem /etc/ssl/certs/713573.pem"
	I1109 21:44:14.431671  734306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/713573.pem
	I1109 21:44:14.436131  734306 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  9 21:41 /usr/share/ca-certificates/713573.pem
	I1109 21:44:14.436188  734306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/713573.pem
	I1109 21:44:14.444546  734306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/713573.pem /etc/ssl/certs/51391683.0"
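
The block above installs each CA into the system trust store: copy the PEM, compute its OpenSSL subject hash, and symlink `<hash>.0` in /etc/ssl/certs (b5213941 is the well-known hash of minikubeCA). A sketch of the same dance from Go by shelling out to openssl, assuming the paths used in this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash that names the /etc/ssl/certs link.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("linked", link)
}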
	I1109 21:44:14.455010  734306 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1109 21:44:14.459305  734306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 21:44:14.467404  734306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 21:44:14.475360  734306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 21:44:14.483170  734306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 21:44:14.491219  734306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 21:44:14.499481  734306 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
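
Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate remains valid for at least the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check in pure Go with crypto/x509, using one of the cert paths from this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: does the cert outlive the next 24 hours?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; would be regenerated")
		os.Exit(1)
	}
	fmt.Println("certificate valid past the 24h window")
}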
	I1109 21:44:14.507750  734306 kubeadm.go:404] StartCluster: {Name:functional-133528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-133528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:44:14.507827  734306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 21:44:14.507886  734306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 21:44:14.551917  734306 cri.go:89] found id: "2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2"
	I1109 21:44:14.551930  734306 cri.go:89] found id: "5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61"
	I1109 21:44:14.551935  734306 cri.go:89] found id: "92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f"
	I1109 21:44:14.551938  734306 cri.go:89] found id: "c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365"
	I1109 21:44:14.551950  734306 cri.go:89] found id: "b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529"
	I1109 21:44:14.551954  734306 cri.go:89] found id: "9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840"
	I1109 21:44:14.551957  734306 cri.go:89] found id: "5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9"
	I1109 21:44:14.551960  734306 cri.go:89] found id: "8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0"
	I1109 21:44:14.551964  734306 cri.go:89] found id: ""
	I1109 21:44:14.552010  734306 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 21:44:14.576988  734306 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2/userdata","rootfs":"/var/lib/containers/storage/overlay/c059f2243b54fa26deb822a6cf790b7710f35ad11c3a43f325980fc5bb4b41c5/merged","created":"2023-11-09T21:44:04.247868842Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"93a25f6c","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"3","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"93a25f6c\",\"io.kubernetes.container.restartCount\":\"3\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.termi
nationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:44:04.173006033Z","io.kubernetes.cri-o.Image":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"34231cba-ba97-4740-bce3-cf1d1f86db1a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_34231cba-ba97-4740-bce3-cf1d1f86db1a/storage-provisioner/3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisio
ner\",\"attempt\":3}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c059f2243b54fa26deb822a6cf790b7710f35ad11c3a43f325980fc5bb4b41c5/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_34231cba-ba97-4740-bce3-cf1d1f86db1a_3","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3efd3f07203da0c5e9cc4b1909549d3231a43db2111fe25d99ac02c28b381c7b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3efd3f07203da0c5e9cc4b1909549d3231a43db2111fe25d99ac02c28b381c7b","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_34231cba-ba97-4740-bce3-cf1d1f86db1a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib
/kubelet/pods/34231cba-ba97-4740-bce3-cf1d1f86db1a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/34231cba-ba97-4740-bce3-cf1d1f86db1a/containers/storage-provisioner/1d11fcdd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/34231cba-ba97-4740-bce3-cf1d1f86db1a/volumes/kubernetes.io~projected/kube-api-access-2lggr\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"34231cba-ba97-4740-bce3-cf1d1f86db1a","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-te
st\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-11-09T21:43:11.162998833Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61/userdata","rootfs":"/var/lib/containers/storage/overlay/0c93a501fa37a184bfd135bc8894934e73cb9d1779d3f5f6a8c8d22e25d44303/merged","created":"2023-11-09T21:4
3:54.21929948Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"904a1eea","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"904a1eea\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/d
ev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:43:54.170939842Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-v5sb8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d17fe4a6-5f21-4dc8-adeb-df67b00b311d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-v5sb8_d17fe4a6-5f21-4dc8-adeb-df67b00b311d/
coredns/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0c93a501fa37a184bfd135bc8894934e73cb9d1779d3f5f6a8c8d22e25d44303/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-v5sb8_kube-system_d17fe4a6-5f21-4dc8-adeb-df67b00b311d_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/5d36fadf7a6da2a2d614a4642744fc8ba9f60170dbc19d01e1bc7b175dd6e3f8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"5d36fadf7a6da2a2d614a4642744fc8ba9f60170dbc19d01e1bc7b175dd6e3f8","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-v5sb8_kube-system_d17fe4a6-5f21-4dc8-adeb-df67b00b311d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/d17fe4a6-5f21-4dc8-adeb-df67b00b311d/vo
lumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d17fe4a6-5f21-4dc8-adeb-df67b00b311d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d17fe4a6-5f21-4dc8-adeb-df67b00b311d/containers/coredns/b97fbddf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d17fe4a6-5f21-4dc8-adeb-df67b00b311d/volumes/kubernetes.io~projected/kube-api-access-smdjf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-v5sb8","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d17fe4a6-5f21-4dc8-adeb-df67b00b311d","kubernetes.io/config.seen":"2023-11-09T21
:43:11.159032206Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9/userdata","rootfs":"/var/lib/containers/storage/overlay/72dfae76ff7ab0134e3c512b8823299da333c9facb63de0950b8a705b04564a6/merged","created":"2023-11-09T21:43:34.782288572Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"de3a6ef5","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"de3a6ef5\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.contain
er.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:43:34.526536756Z","io.kubernetes.cri-o.Image":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.3","io.kubernetes.cri-o.ImageRef":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-functional-133528\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1cea9dac62aafe43e9805d9820d5d702\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-functional-133528_1cea9dac62aafe43e9805d9820d5d702/kube-apiserver/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube
-apiserver\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72dfae76ff7ab0134e3c512b8823299da333c9facb63de0950b8a705b04564a6/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-functional-133528_kube-system_1cea9dac62aafe43e9805d9820d5d702_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8c6527a40a3537e5e35022166abf33789be833020f6d1928957ca5248e46e60e/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8c6527a40a3537e5e35022166abf33789be833020f6d1928957ca5248e46e60e","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-functional-133528_kube-system_1cea9dac62aafe43e9805d9820d5d702_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1cea9dac62aafe43e9805d9820d5d702/containers/kube-apiserver/6171a132\",\"
readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1cea9dac62aafe43e9805d9820d5d702/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernete
s.pod.name":"kube-apiserver-functional-133528","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1cea9dac62aafe43e9805d9820d5d702","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1cea9dac62aafe43e9805d9820d5d702","kubernetes.io/config.seen":"2023-11-09T21:42:18.716070809Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0/userdata","rootfs":"/var/lib/containers/storage/overlay/2821e275faa685397ab43b169557d920507ac648634ea6533e652a655d06d473/merged","created":"2023-11-09T21:43:34.82275717Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"83906433","io.kubernetes.container.name":"kube-controller-manager","io
.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"83906433\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:43:34.450076903Z","io.kubernetes.cri-o.Image":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.3","io.kubernetes.cri-o.ImageRef":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","io.kubernetes.cri-o.Labels":"{\"io.kubernetes
.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-functional-133528\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ba06f9ae1959b5f2c084936d7a025921\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-functional-133528_ba06f9ae1959b5f2c084936d7a025921/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2821e275faa685397ab43b169557d920507ac648634ea6533e652a655d06d473/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-functional-133528_kube-system_ba06f9ae1959b5f2c084936d7a025921_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2851bf48f59b126730df629d106108c5f5bf011bc769961ec25a5d85bdb0436c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2851bf48f59b126730df629d106108c5f5bf011bc769961ec25a5d85b
db0436c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-functional-133528_kube-system_ba06f9ae1959b5f2c084936d7a025921_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ba06f9ae1959b5f2c084936d7a025921/containers/kube-controller-manager/3cfffada\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ba06f9ae1959b5f2c084936d7a025921/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"contain
er_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-functional-133528","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod
":"30","io.kubernetes.pod.uid":"ba06f9ae1959b5f2c084936d7a025921","kubernetes.io/config.hash":"ba06f9ae1959b5f2c084936d7a025921","kubernetes.io/config.seen":"2023-11-09T21:42:18.716071933Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f/userdata","rootfs":"/var/lib/containers/storage/overlay/939b0dd3c4b1bae5dc144a5cdd2c1ec286ba319288f19c0b695cf057f53c060f/merged","created":"2023-11-09T21:43:34.851129768Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2f32bf5d","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash
\":\"2f32bf5d\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:43:34.661664499Z","io.kubernetes.cri-o.Image":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-functional-133528\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"2b47656032291862c29df3174e8d507e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd
-functional-133528_2b47656032291862c29df3174e8d507e/etcd/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/939b0dd3c4b1bae5dc144a5cdd2c1ec286ba319288f19c0b695cf057f53c060f/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-functional-133528_kube-system_2b47656032291862c29df3174e8d507e_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8b74f8daa326a372b44dc3e7e3f38a62b366970c5394e79a097f60e857c2f956/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8b74f8daa326a372b44dc3e7e3f38a62b366970c5394e79a097f60e857c2f956","io.kubernetes.cri-o.SandboxName":"k8s_etcd-functional-133528_kube-system_2b47656032291862c29df3174e8d507e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/2b47656032
291862c29df3174e8d507e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/2b47656032291862c29df3174e8d507e/containers/etcd/49a5f6ba\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-functional-133528","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"2b47656032291862c29df3174e8d507e","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"2b47656032291862c29df3174e8d507e","kubernetes.io/config.seen":"2023-11-09T21:42:18.71606929
9Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840/userdata","rootfs":"/var/lib/containers/storage/overlay/8f3050aef0dc422f60cf17a2faa727dfde640ead29b392b474bb7c1119261525/merged","created":"2023-11-09T21:43:34.807782921Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"55b45d6f","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"55b45d6f\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:43:34.561602888Z","io.kubernetes.cri-o.Image":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-dl9h4\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"dc246843-542b-4d30-835e-3271c0bc77b9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-dl9h4_dc246843-542b-4d30-835e-3271c0bc77b9/kindnet-cni/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":2}","io.kubernetes.cri-o
.MountPoint":"/var/lib/containers/storage/overlay/8f3050aef0dc422f60cf17a2faa727dfde640ead29b392b474bb7c1119261525/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-dl9h4_kube-system_dc246843-542b-4d30-835e-3271c0bc77b9_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a796022c96694e6c945acdf3cf9588e4ac337b66a0ae33ea1d2244fb88c011a6/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a796022c96694e6c945acdf3cf9588e4ac337b66a0ae33ea1d2244fb88c011a6","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-dl9h4_kube-system_dc246843-542b-4d30-835e-3271c0bc77b9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propaga
tion\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/dc246843-542b-4d30-835e-3271c0bc77b9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/dc246843-542b-4d30-835e-3271c0bc77b9/containers/kindnet-cni/5b43c23e\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/dc246843-542b-4d30-835e-3271c0bc77b9/volumes/kubernetes.io~projected/kube-api-access-rbnm7\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-dl9h4","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"dc246843-542b-4d30-835e-
3271c0bc77b9","kubernetes.io/config.seen":"2023-11-09T21:42:38.660592794Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529/userdata","rootfs":"/var/lib/containers/storage/overlay/bd1eb3e3b2c671ef293d5c81a2e92459ba4d8de31e903e3827cf86db95894079/merged","created":"2023-11-09T21:43:34.732282873Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1a68c1c3","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1a68c1c3\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessag
ePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:43:34.579481404Z","io.kubernetes.cri-o.Image":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.3","io.kubernetes.cri-o.ImageRef":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-functional-133528\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"07308619ce1731ed35aae00793bedffa\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-functional-133528_07308619ce1731ed35aae00793bedffa/kube-scheduler
/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/bd1eb3e3b2c671ef293d5c81a2e92459ba4d8de31e903e3827cf86db95894079/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-functional-133528_kube-system_07308619ce1731ed35aae00793bedffa_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/ea9eb0849f508dd896b72bb326b8155132584bea5aaa9e5a2aae19bc70b512c8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"ea9eb0849f508dd896b72bb326b8155132584bea5aaa9e5a2aae19bc70b512c8","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-functional-133528_kube-system_07308619ce1731ed35aae00793bedffa_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/07308619ce1731ed35aae00793
bedffa/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/07308619ce1731ed35aae00793bedffa/containers/kube-scheduler/22227248\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-functional-133528","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"07308619ce1731ed35aae00793bedffa","kubernetes.io/config.hash":"07308619ce1731ed35aae00793bedffa","kubernetes.io/config.seen":"2023-11-09T21:42:18.716063671Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay
-containers/c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365/userdata","rootfs":"/var/lib/containers/storage/overlay/7299826675d0dfb059ed4dd7c07b6228bc60c8b46c68ff73f1705e113f7ae241/merged","created":"2023-11-09T21:43:34.795901484Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8d9c99c8","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8d9c99c8\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365","io.kubernetes.cri-o.ContainerType":"container",
"io.kubernetes.cri-o.Created":"2023-11-09T21:43:34.62022017Z","io.kubernetes.cri-o.Image":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.3","io.kubernetes.cri-o.ImageRef":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-mkncf\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b9248010-d001-480c-9955-fbde48cfc39c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-mkncf_b9248010-d001-480c-9955-fbde48cfc39c/kube-proxy/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/7299826675d0dfb059ed4dd7c07b6228bc60c8b46c68ff73f1705e113f7ae241/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-mkncf_kube-system_b9248010-d001-480c-9955-fbde48cfc39c_2"
,"io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/37ff9727474ee82f0dddbccf65d201cfe3aa1682f1e9e6991b85661162de3269/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"37ff9727474ee82f0dddbccf65d201cfe3aa1682f1e9e6991b85661162de3269","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-mkncf_kube-system_b9248010-d001-480c-9955-fbde48cfc39c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b9248010-d001-480c-9955-fbde48cfc39c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"cont
ainer_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b9248010-d001-480c-9955-fbde48cfc39c/containers/kube-proxy/8415ba30\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/b9248010-d001-480c-9955-fbde48cfc39c/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/b9248010-d001-480c-9955-fbde48cfc39c/volumes/kubernetes.io~projected/kube-api-access-p8dkh\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-mkncf","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b9248010-d001-480c-9955-fbde48cfc39c","kubernetes.io/config.seen":"2023-11-09T21:42:38.649155637Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1109 21:44:14.577582  734306 cri.go:126] list returned 8 containers
	I1109 21:44:14.577594  734306 cri.go:129] container: {ID:2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2 Status:stopped}
	I1109 21:44:14.577607  734306 cri.go:135] skipping {2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2 stopped}: state = "stopped", want "paused"
	I1109 21:44:14.577615  734306 cri.go:129] container: {ID:5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61 Status:stopped}
	I1109 21:44:14.577622  734306 cri.go:135] skipping {5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61 stopped}: state = "stopped", want "paused"
	I1109 21:44:14.577628  734306 cri.go:129] container: {ID:5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9 Status:stopped}
	I1109 21:44:14.577633  734306 cri.go:135] skipping {5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9 stopped}: state = "stopped", want "paused"
	I1109 21:44:14.577638  734306 cri.go:129] container: {ID:8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0 Status:stopped}
	I1109 21:44:14.577644  734306 cri.go:135] skipping {8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0 stopped}: state = "stopped", want "paused"
	I1109 21:44:14.577649  734306 cri.go:129] container: {ID:92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f Status:stopped}
	I1109 21:44:14.577656  734306 cri.go:135] skipping {92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f stopped}: state = "stopped", want "paused"
	I1109 21:44:14.577661  734306 cri.go:129] container: {ID:9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840 Status:stopped}
	I1109 21:44:14.577667  734306 cri.go:135] skipping {9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840 stopped}: state = "stopped", want "paused"
	I1109 21:44:14.577672  734306 cri.go:129] container: {ID:b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529 Status:stopped}
	I1109 21:44:14.577677  734306 cri.go:135] skipping {b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529 stopped}: state = "stopped", want "paused"
	I1109 21:44:14.577682  734306 cri.go:129] container: {ID:c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365 Status:stopped}
	I1109 21:44:14.577687  734306 cri.go:135] skipping {c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365 stopped}: state = "stopped", want "paused"
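
The `sudo runc list -f json` output above is an array of OCI container state objects, of which minikube only needs the id and status fields; it is looking for "paused" containers here, finds only "stopped" ones, and skips them all. A minimal decode-and-filter sketch under that reading, with the struct trimmed to the two fields that matter:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // containerState is a pared-down view of runc's JSON list output; the real
    // objects also carry bundle, rootfs, created time, and annotations.
    type containerState struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    // filterByState keeps only the IDs whose status matches want, mirroring
    // the log's `skipping {<id> stopped}: state = "stopped", want "paused"`.
    func filterByState(raw []byte, want string) ([]string, error) {
        var all []containerState
        if err := json.Unmarshal(raw, &all); err != nil {
            return nil, err
        }
        var ids []string
        for _, c := range all {
            if c.Status == want {
                ids = append(ids, c.ID)
            }
        }
        return ids, nil
    }

    func main() {
        // shortened sample input; the log's IDs are full 64-character hashes
        raw := []byte(`[{"id":"2d22c48b","status":"stopped"},{"id":"5b78325e","status":"paused"}]`)
        ids, err := filterByState(raw, "paused")
        if err != nil {
            panic(err)
        }
        fmt.Println(ids) // [5b78325e]
    }
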
	I1109 21:44:14.577737  734306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 21:44:14.588609  734306 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1109 21:44:14.588621  734306 kubeadm.go:636] restartCluster start
	I1109 21:44:14.588677  734306 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 21:44:14.598693  734306 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 21:44:14.599228  734306 kubeconfig.go:92] found "functional-133528" server: "https://192.168.49.2:8441"
	I1109 21:44:14.600893  734306 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 21:44:14.611322  734306 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-11-09 21:42:10.296234969 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-11-09 21:44:14.007391010 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
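
"needs reconfigure" is decided by diffing the kubeadm config already on disk against the freshly generated one: `diff -u` exits 0 when the files match and 1 when they differ, and the unified diff above shows the only change is the enable-admission-plugins value this functional test requests. A sketch of that decision, assuming local paths:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configsDiffer runs `diff -u old new`. diff exits 0 for identical files,
    // 1 for differing files, and >1 on error; exec surfaces nonzero exits as
    // *exec.ExitError.
    func configsDiffer(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // files differ; out holds the unified diff
        }
        return false, "", err
    }

    func main() {
        differ, diff, err := configsDiffer("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if differ {
            fmt.Println("needs reconfigure: configs differ:\n" + diff)
        }
    }
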
	I1109 21:44:14.611331  734306 kubeadm.go:1128] stopping kube-system containers ...
	I1109 21:44:14.611345  734306 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1109 21:44:14.611399  734306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 21:44:14.652833  734306 cri.go:89] found id: "2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2"
	I1109 21:44:14.652846  734306 cri.go:89] found id: "5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61"
	I1109 21:44:14.652850  734306 cri.go:89] found id: "92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f"
	I1109 21:44:14.652856  734306 cri.go:89] found id: "c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365"
	I1109 21:44:14.652859  734306 cri.go:89] found id: "b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529"
	I1109 21:44:14.652863  734306 cri.go:89] found id: "9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840"
	I1109 21:44:14.652866  734306 cri.go:89] found id: "5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9"
	I1109 21:44:14.652869  734306 cri.go:89] found id: "8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0"
	I1109 21:44:14.652872  734306 cri.go:89] found id: ""
	I1109 21:44:14.652877  734306 cri.go:234] Stopping containers: [2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2 5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61 92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365 b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529 9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840 5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9 8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0]
	I1109 21:44:14.652936  734306 ssh_runner.go:195] Run: which crictl
	I1109 21:44:14.657569  734306 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2 5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61 92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365 b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529 9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840 5c0a740ce002678ff43309ed39b571b21d7b94ef2a5607a05a55780ecfbe95b9 8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0
	I1109 21:44:14.725184  734306 ssh_runner.go:195] Run: sudo systemctl stop kubelet
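
The teardown order above matters: all eight kube-system containers are stopped with a single `crictl stop --timeout=10` invocation, and only then is kubelet stopped, so nothing restarts the containers mid-teardown. A short sketch of that single invocation, with hypothetical shortened IDs standing in for the log's full hashes:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // stopContainers mirrors the log's one crictl call: a single command, a
    // 10-second per-container stop timeout, and every ID as a trailing argument.
    func stopContainers(ids []string) error {
        args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
        return exec.Command("sudo", args...).Run()
    }

    func main() {
        // shortened sample IDs; the log passes all eight full 64-character IDs
        err := stopContainers([]string{"2d22c48bdfe9", "5b78325e9454"})
        fmt.Println("stop error:", err)
    }
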
	I1109 21:44:14.825621  734306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 21:44:14.836859  734306 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  9 21:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  9 21:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  9 21:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  9 21:42 /etc/kubernetes/scheduler.conf
	
	I1109 21:44:14.836919  734306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1109 21:44:14.847993  734306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1109 21:44:14.859601  734306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1109 21:44:14.870858  734306 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 21:44:14.870915  734306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 21:44:14.882232  734306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1109 21:44:14.893338  734306 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 21:44:14.893409  734306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 21:44:14.904831  734306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 21:44:14.916086  734306 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1109 21:44:14.916101  734306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 21:44:14.981788  734306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 21:44:16.485286  734306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.50347338s)
	I1109 21:44:16.485305  734306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 21:44:16.691402  734306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 21:44:16.772973  734306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
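
Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd, and later addon) against the regenerated config, each run through bash with the version-pinned binaries directory prepended to PATH. A sketch of that sequence, using a hypothetical runPhase helper:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runPhase replays one `kubeadm init phase ...` step the way the log does:
    // via bash, with the v1.28.3 binaries first on PATH.
    func runPhase(phase string) error {
        cmd := fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
            phase)
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
        for _, phase := range []string{
            "certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
        } {
            if err := runPhase(phase); err != nil {
                panic(fmt.Errorf("phase %q: %w", phase, err))
            }
        }
    }
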
	I1109 21:44:16.915755  734306 api_server.go:52] waiting for apiserver process to appear ...
	I1109 21:44:16.915817  734306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 21:44:16.939626  734306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 21:44:17.452903  734306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 21:44:17.953085  734306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 21:44:17.983612  734306 api_server.go:72] duration metric: took 1.067854469s to wait for apiserver process to appear ...
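
"waiting for apiserver process to appear" is a poll on `pgrep -xnf kube-apiserver.*minikube.*`: pgrep exits 0 once a matching process exists, so the loop simply retries until the restarted static pod's apiserver shows up (about 1.07s here). A sketch of that wait, assuming a half-second retry interval like the timestamps suggest:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
    // command line mentions "minikube" exists, or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("apiserver process is up")
    }
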
	I1109 21:44:17.983625  734306 api_server.go:88] waiting for apiserver healthz status ...
	I1109 21:44:17.983640  734306 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 21:44:21.795196  734306 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 21:44:21.795223  734306 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 21:44:21.795233  734306 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 21:44:21.976924  734306 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 21:44:21.976943  734306 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 21:44:22.477572  734306 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 21:44:22.487636  734306 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1109 21:44:22.487656  734306 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1109 21:44:22.977239  734306 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 21:44:22.992346  734306 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1109 21:44:22.992366  734306 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1109 21:44:23.477581  734306 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 21:44:23.487975  734306 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1109 21:44:23.505230  734306 api_server.go:141] control plane version: v1.28.3
	I1109 21:44:23.505249  734306 api_server.go:131] duration metric: took 5.521618147s to wait for apiserver health ...
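
The healthz progression above is typical of an apiserver restart: first 403, because the anonymous probe is rejected while RBAC bootstrap roles do not yet exist; then 500 while the rbac/bootstrap-roles and scheduling post-start hooks are still failing; and finally 200 once every hook passes. A polling sketch of that loop, assuming the apiserver's self-signed serving certificate is skipped rather than verified against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz fetches /healthz until it returns 200 "ok". 403 and 500
    // responses are expected transients while post-start hooks finish, so
    // they are logged and retried.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // assumption: skip cert verification for brevity
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz did not become ready within %s", timeout)
    }

    func main() {
        if err := pollHealthz("https://192.168.49.2:8441/healthz", 4*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("ok")
    }
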
	I1109 21:44:23.505256  734306 cni.go:84] Creating CNI manager for ""
	I1109 21:44:23.505264  734306 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:44:23.507519  734306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1109 21:44:23.509077  734306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 21:44:23.514841  734306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1109 21:44:23.514852  734306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1109 21:44:23.543486  734306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 21:44:24.328993  734306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 21:44:24.347447  734306 system_pods.go:59] 8 kube-system pods found
	I1109 21:44:24.347470  734306 system_pods.go:61] "coredns-5dd5756b68-v5sb8" [d17fe4a6-5f21-4dc8-adeb-df67b00b311d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 21:44:24.347478  734306 system_pods.go:61] "etcd-functional-133528" [dcd780e1-b7d5-4113-89d9-b9a9c5cbad99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 21:44:24.347483  734306 system_pods.go:61] "kindnet-dl9h4" [dc246843-542b-4d30-835e-3271c0bc77b9] Running
	I1109 21:44:24.347493  734306 system_pods.go:61] "kube-apiserver-functional-133528" [298e7c71-5e96-4310-aff0-1947ede1dd98] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 21:44:24.347499  734306 system_pods.go:61] "kube-controller-manager-functional-133528" [cd177459-459d-43d4-b888-3c0b1c204bc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 21:44:24.347504  734306 system_pods.go:61] "kube-proxy-mkncf" [b9248010-d001-480c-9955-fbde48cfc39c] Running
	I1109 21:44:24.347509  734306 system_pods.go:61] "kube-scheduler-functional-133528" [fbf27352-e261-443a-ad1f-59047caae401] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 21:44:24.347513  734306 system_pods.go:61] "storage-provisioner" [34231cba-ba97-4740-bce3-cf1d1f86db1a] Running
	I1109 21:44:24.347519  734306 system_pods.go:74] duration metric: took 18.515189ms to wait for pod list to return data ...
	I1109 21:44:24.347527  734306 node_conditions.go:102] verifying NodePressure condition ...
	I1109 21:44:24.353765  734306 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 21:44:24.353784  734306 node_conditions.go:123] node cpu capacity is 2
	I1109 21:44:24.353794  734306 node_conditions.go:105] duration metric: took 6.262825ms to run NodePressure ...
	I1109 21:44:24.353813  734306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 21:44:24.552272  734306 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1109 21:44:24.556983  734306 kubeadm.go:787] kubelet initialised
	I1109 21:44:24.556993  734306 kubeadm.go:788] duration metric: took 4.707562ms waiting for restarted kubelet to initialise ...
	I1109 21:44:24.557000  734306 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:44:24.563180  734306 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-v5sb8" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:26.581139  734306 pod_ready.go:92] pod "coredns-5dd5756b68-v5sb8" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:26.581152  734306 pod_ready.go:81] duration metric: took 2.017958729s waiting for pod "coredns-5dd5756b68-v5sb8" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:26.581161  734306 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:28.597756  734306 pod_ready.go:102] pod "etcd-functional-133528" in "kube-system" namespace has status "Ready":"False"
	I1109 21:44:30.599373  734306 pod_ready.go:102] pod "etcd-functional-133528" in "kube-system" namespace has status "Ready":"False"
	I1109 21:44:33.098487  734306 pod_ready.go:102] pod "etcd-functional-133528" in "kube-system" namespace has status "Ready":"False"
	I1109 21:44:35.098791  734306 pod_ready.go:102] pod "etcd-functional-133528" in "kube-system" namespace has status "Ready":"False"
	I1109 21:44:35.597708  734306 pod_ready.go:92] pod "etcd-functional-133528" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:35.597718  734306 pod_ready.go:81] duration metric: took 9.016552234s waiting for pod "etcd-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:35.597730  734306 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:36.615173  734306 pod_ready.go:92] pod "kube-apiserver-functional-133528" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:36.615184  734306 pod_ready.go:81] duration metric: took 1.017447556s waiting for pod "kube-apiserver-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:36.615193  734306 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:36.620954  734306 pod_ready.go:92] pod "kube-controller-manager-functional-133528" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:36.620965  734306 pod_ready.go:81] duration metric: took 5.7655ms waiting for pod "kube-controller-manager-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:36.620974  734306 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mkncf" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:36.626161  734306 pod_ready.go:92] pod "kube-proxy-mkncf" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:36.626172  734306 pod_ready.go:81] duration metric: took 5.192457ms waiting for pod "kube-proxy-mkncf" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:36.626181  734306 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:36.797217  734306 pod_ready.go:92] pod "kube-scheduler-functional-133528" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:36.797228  734306 pod_ready.go:81] duration metric: took 171.04056ms waiting for pod "kube-scheduler-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:36.797240  734306 pod_ready.go:38] duration metric: took 12.240226342s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:44:36.797255  734306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 21:44:36.806060  734306 ops.go:34] apiserver oom_adj: -16
	I1109 21:44:36.806072  734306 kubeadm.go:640] restartCluster took 22.217446009s
	I1109 21:44:36.806079  734306 kubeadm.go:406] StartCluster complete in 22.29834579s
	I1109 21:44:36.806106  734306 settings.go:142] acquiring lock: {Name:mk717b4baf2280543b738622644195ea0d60d476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:44:36.806208  734306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:44:36.806898  734306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/kubeconfig: {Name:mk5701fd19491b0b49f183ef877286e38ea5f8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:44:36.807111  734306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 21:44:36.807387  734306 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 21:44:36.807523  734306 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1109 21:44:36.807608  734306 addons.go:69] Setting storage-provisioner=true in profile "functional-133528"
	I1109 21:44:36.807635  734306 addons.go:231] Setting addon storage-provisioner=true in "functional-133528"
	W1109 21:44:36.807640  734306 addons.go:240] addon storage-provisioner should already be in state true
	I1109 21:44:36.807706  734306 host.go:66] Checking if "functional-133528" exists ...
	I1109 21:44:36.808044  734306 addons.go:69] Setting default-storageclass=true in profile "functional-133528"
	I1109 21:44:36.808059  734306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-133528"
	I1109 21:44:36.808184  734306 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
	I1109 21:44:36.808398  734306 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
	I1109 21:44:36.816062  734306 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-133528" context rescaled to 1 replicas
	I1109 21:44:36.816090  734306 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 21:44:36.826241  734306 out.go:177] * Verifying Kubernetes components...
	I1109 21:44:36.831764  734306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 21:44:36.839134  734306 addons.go:231] Setting addon default-storageclass=true in "functional-133528"
	W1109 21:44:36.839144  734306 addons.go:240] addon default-storageclass should already be in state true
	I1109 21:44:36.839167  734306 host.go:66] Checking if "functional-133528" exists ...
	I1109 21:44:36.839626  734306 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
	I1109 21:44:36.867818  734306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:44:36.869359  734306 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 21:44:36.869368  734306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 21:44:36.869463  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:36.885311  734306 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 21:44:36.885323  734306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 21:44:36.885386  734306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
	I1109 21:44:36.899067  734306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
	I1109 21:44:36.925395  734306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
	I1109 21:44:36.983833  734306 node_ready.go:35] waiting up to 6m0s for node "functional-133528" to be "Ready" ...
	I1109 21:44:36.983967  734306 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1109 21:44:36.996245  734306 node_ready.go:49] node "functional-133528" has status "Ready":"True"
	I1109 21:44:36.996256  734306 node_ready.go:38] duration metric: took 12.405446ms waiting for node "functional-133528" to be "Ready" ...
	I1109 21:44:36.996266  734306 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:44:37.036335  734306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 21:44:37.075840  734306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 21:44:37.201652  734306 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-v5sb8" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:37.507306  734306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1109 21:44:37.513798  734306 addons.go:502] enable addons completed in 706.264884ms: enabled=[storage-provisioner default-storageclass]
	I1109 21:44:37.595616  734306 pod_ready.go:92] pod "coredns-5dd5756b68-v5sb8" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:37.595627  734306 pod_ready.go:81] duration metric: took 393.960915ms waiting for pod "coredns-5dd5756b68-v5sb8" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:37.595636  734306 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:37.995615  734306 pod_ready.go:92] pod "etcd-functional-133528" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:37.995627  734306 pod_ready.go:81] duration metric: took 399.984989ms waiting for pod "etcd-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:37.995640  734306 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:38.396489  734306 pod_ready.go:92] pod "kube-apiserver-functional-133528" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:38.396506  734306 pod_ready.go:81] duration metric: took 400.854235ms waiting for pod "kube-apiserver-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:38.396517  734306 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:38.795863  734306 pod_ready.go:92] pod "kube-controller-manager-functional-133528" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:38.795875  734306 pod_ready.go:81] duration metric: took 399.351525ms waiting for pod "kube-controller-manager-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:38.795886  734306 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkncf" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:39.195849  734306 pod_ready.go:92] pod "kube-proxy-mkncf" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:39.195863  734306 pod_ready.go:81] duration metric: took 399.97109ms waiting for pod "kube-proxy-mkncf" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:39.195873  734306 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:39.595691  734306 pod_ready.go:92] pod "kube-scheduler-functional-133528" in "kube-system" namespace has status "Ready":"True"
	I1109 21:44:39.595702  734306 pod_ready.go:81] duration metric: took 399.818557ms waiting for pod "kube-scheduler-functional-133528" in "kube-system" namespace to be "Ready" ...
	I1109 21:44:39.595712  734306 pod_ready.go:38] duration metric: took 2.599437161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:44:39.595725  734306 api_server.go:52] waiting for apiserver process to appear ...
	I1109 21:44:39.595785  734306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 21:44:39.608542  734306 api_server.go:72] duration metric: took 2.792425404s to wait for apiserver process to appear ...
	I1109 21:44:39.608556  734306 api_server.go:88] waiting for apiserver healthz status ...
	I1109 21:44:39.608572  734306 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 21:44:39.618126  734306 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1109 21:44:39.619648  734306 api_server.go:141] control plane version: v1.28.3
	I1109 21:44:39.619661  734306 api_server.go:131] duration metric: took 11.099444ms to wait for apiserver health ...
	I1109 21:44:39.619667  734306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 21:44:39.799373  734306 system_pods.go:59] 8 kube-system pods found
	I1109 21:44:39.799387  734306 system_pods.go:61] "coredns-5dd5756b68-v5sb8" [d17fe4a6-5f21-4dc8-adeb-df67b00b311d] Running
	I1109 21:44:39.799391  734306 system_pods.go:61] "etcd-functional-133528" [dcd780e1-b7d5-4113-89d9-b9a9c5cbad99] Running
	I1109 21:44:39.799397  734306 system_pods.go:61] "kindnet-dl9h4" [dc246843-542b-4d30-835e-3271c0bc77b9] Running
	I1109 21:44:39.799401  734306 system_pods.go:61] "kube-apiserver-functional-133528" [298e7c71-5e96-4310-aff0-1947ede1dd98] Running
	I1109 21:44:39.799406  734306 system_pods.go:61] "kube-controller-manager-functional-133528" [cd177459-459d-43d4-b888-3c0b1c204bc7] Running
	I1109 21:44:39.799409  734306 system_pods.go:61] "kube-proxy-mkncf" [b9248010-d001-480c-9955-fbde48cfc39c] Running
	I1109 21:44:39.799413  734306 system_pods.go:61] "kube-scheduler-functional-133528" [fbf27352-e261-443a-ad1f-59047caae401] Running
	I1109 21:44:39.799417  734306 system_pods.go:61] "storage-provisioner" [34231cba-ba97-4740-bce3-cf1d1f86db1a] Running
	I1109 21:44:39.799421  734306 system_pods.go:74] duration metric: took 179.749933ms to wait for pod list to return data ...
	I1109 21:44:39.799428  734306 default_sa.go:34] waiting for default service account to be created ...
	I1109 21:44:39.995936  734306 default_sa.go:45] found service account: "default"
	I1109 21:44:39.995950  734306 default_sa.go:55] duration metric: took 196.515948ms for default service account to be created ...
	I1109 21:44:39.995959  734306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 21:44:40.198821  734306 system_pods.go:86] 8 kube-system pods found
	I1109 21:44:40.198836  734306 system_pods.go:89] "coredns-5dd5756b68-v5sb8" [d17fe4a6-5f21-4dc8-adeb-df67b00b311d] Running
	I1109 21:44:40.198841  734306 system_pods.go:89] "etcd-functional-133528" [dcd780e1-b7d5-4113-89d9-b9a9c5cbad99] Running
	I1109 21:44:40.198845  734306 system_pods.go:89] "kindnet-dl9h4" [dc246843-542b-4d30-835e-3271c0bc77b9] Running
	I1109 21:44:40.198849  734306 system_pods.go:89] "kube-apiserver-functional-133528" [298e7c71-5e96-4310-aff0-1947ede1dd98] Running
	I1109 21:44:40.198858  734306 system_pods.go:89] "kube-controller-manager-functional-133528" [cd177459-459d-43d4-b888-3c0b1c204bc7] Running
	I1109 21:44:40.198862  734306 system_pods.go:89] "kube-proxy-mkncf" [b9248010-d001-480c-9955-fbde48cfc39c] Running
	I1109 21:44:40.198865  734306 system_pods.go:89] "kube-scheduler-functional-133528" [fbf27352-e261-443a-ad1f-59047caae401] Running
	I1109 21:44:40.198869  734306 system_pods.go:89] "storage-provisioner" [34231cba-ba97-4740-bce3-cf1d1f86db1a] Running
	I1109 21:44:40.198875  734306 system_pods.go:126] duration metric: took 202.910941ms to wait for k8s-apps to be running ...
	I1109 21:44:40.198882  734306 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 21:44:40.198943  734306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 21:44:40.213275  734306 system_svc.go:56] duration metric: took 14.382827ms WaitForService to wait for kubelet.
	I1109 21:44:40.213291  734306 kubeadm.go:581] duration metric: took 3.397182376s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 21:44:40.213309  734306 node_conditions.go:102] verifying NodePressure condition ...
	I1109 21:44:40.395742  734306 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 21:44:40.395756  734306 node_conditions.go:123] node cpu capacity is 2
	I1109 21:44:40.395766  734306 node_conditions.go:105] duration metric: took 182.452339ms to run NodePressure ...
	I1109 21:44:40.395775  734306 start.go:228] waiting for startup goroutines ...
	I1109 21:44:40.395781  734306 start.go:233] waiting for cluster config update ...
	I1109 21:44:40.395790  734306 start.go:242] writing updated cluster config ...
	I1109 21:44:40.396087  734306 ssh_runner.go:195] Run: rm -f paused
	I1109 21:44:40.462991  734306 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1109 21:44:40.465408  734306 out.go:177] * Done! kubectl is now configured to use "functional-133528" cluster and "default" namespace by default
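	
	The verbose healthz dumps earlier in this log can be reproduced against the apiserver directly. A minimal check, assuming the functional-133528 cluster is still up and anonymous access to /healthz is enabled (the Kubernetes default):
	
	  # Per-check detail, matching the "[+]ping ok ... healthz check failed" listings above
	  # (-k skips TLS verification for the minikube-generated certificate)
	  curl -k "https://192.168.49.2:8441/healthz?verbose"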
	
	* 
	* ==> CRI-O <==
	* Nov 09 21:45:23 functional-133528 crio[4529]: time="2023-11-09 21:45:23.083219475Z" level=info msg="Image docker.io/nginx:alpine not found" id=98bbd8bd-ba1d-47af-b058-88d3221793fd name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:45:37 functional-133528 crio[4529]: time="2023-11-09 21:45:37.879344142Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=07f3ba54-b426-4231-a284-16005a586726 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:45:37 functional-133528 crio[4529]: time="2023-11-09 21:45:37.879583262Z" level=info msg="Image docker.io/nginx:alpine not found" id=07f3ba54-b426-4231-a284-16005a586726 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:45:52 functional-133528 crio[4529]: time="2023-11-09 21:45:52.692861978Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=e9ba7a07-7d6e-423f-8a1b-92e9eedf1e98 name=/runtime.v1.ImageService/PullImage
	Nov 09 21:45:52 functional-133528 crio[4529]: time="2023-11-09 21:45:52.694815975Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Nov 09 21:45:53 functional-133528 crio[4529]: time="2023-11-09 21:45:53.139423413Z" level=info msg="Checking image status: docker.io/nginx:latest" id=a423d513-b986-4f39-a8c3-da072a5ed826 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:45:53 functional-133528 crio[4529]: time="2023-11-09 21:45:53.139701064Z" level=info msg="Image docker.io/nginx:latest not found" id=a423d513-b986-4f39-a8c3-da072a5ed826 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:46:04 functional-133528 crio[4529]: time="2023-11-09 21:46:04.880258638Z" level=info msg="Checking image status: docker.io/nginx:latest" id=45a3435c-8991-4585-9743-6e68161e910a name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:46:04 functional-133528 crio[4529]: time="2023-11-09 21:46:04.880482595Z" level=info msg="Image docker.io/nginx:latest not found" id=45a3435c-8991-4585-9743-6e68161e910a name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:46:37 functional-133528 crio[4529]: time="2023-11-09 21:46:37.167287259Z" level=info msg="Pulling image: docker.io/nginx:latest" id=37853c2c-40cc-44f2-a0fa-9826ed08af85 name=/runtime.v1.ImageService/PullImage
	Nov 09 21:46:37 functional-133528 crio[4529]: time="2023-11-09 21:46:37.169337838Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 09 21:46:49 functional-133528 crio[4529]: time="2023-11-09 21:46:49.879861233Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=09c8e12e-46f8-4afe-a4a1-be53782e75d0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:46:49 functional-133528 crio[4529]: time="2023-11-09 21:46:49.880087259Z" level=info msg="Image docker.io/nginx:alpine not found" id=09c8e12e-46f8-4afe-a4a1-be53782e75d0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:47:04 functional-133528 crio[4529]: time="2023-11-09 21:47:04.879930893Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=6f057eb0-83f5-4174-a336-dec1bc1f446e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:47:04 functional-133528 crio[4529]: time="2023-11-09 21:47:04.880169152Z" level=info msg="Image docker.io/nginx:alpine not found" id=6f057eb0-83f5-4174-a336-dec1bc1f446e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:47:07 functional-133528 crio[4529]: time="2023-11-09 21:47:07.448896792Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=d654d403-bf03-4032-a0f2-1628d2884057 name=/runtime.v1.ImageService/PullImage
	Nov 09 21:47:07 functional-133528 crio[4529]: time="2023-11-09 21:47:07.449911718Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Nov 09 21:47:21 functional-133528 crio[4529]: time="2023-11-09 21:47:21.879908178Z" level=info msg="Checking image status: docker.io/nginx:latest" id=21fe158c-b37c-4d55-aaa3-65cc680d0198 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:47:21 functional-133528 crio[4529]: time="2023-11-09 21:47:21.880143278Z" level=info msg="Image docker.io/nginx:latest not found" id=21fe158c-b37c-4d55-aaa3-65cc680d0198 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:47:36 functional-133528 crio[4529]: time="2023-11-09 21:47:36.880056693Z" level=info msg="Checking image status: docker.io/nginx:latest" id=00c7224d-a8f2-47b7-9724-84dc21136f6f name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:47:36 functional-133528 crio[4529]: time="2023-11-09 21:47:36.880279977Z" level=info msg="Image docker.io/nginx:latest not found" id=00c7224d-a8f2-47b7-9724-84dc21136f6f name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:47:37 functional-133528 crio[4529]: time="2023-11-09 21:47:37.727504049Z" level=info msg="Pulling image: docker.io/nginx:latest" id=9ca588f9-127e-43c1-b2ab-b8eb207c9203 name=/runtime.v1.ImageService/PullImage
	Nov 09 21:47:37 functional-133528 crio[4529]: time="2023-11-09 21:47:37.729465627Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 09 21:47:49 functional-133528 crio[4529]: time="2023-11-09 21:47:49.880093251Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=d7563ae9-5b10-44ce-8834-164447ce8078 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 21:47:49 functional-133528 crio[4529]: time="2023-11-09 21:47:49.880315379Z" level=info msg="Image docker.io/nginx:alpine not found" id=d7563ae9-5b10-44ce-8834-164447ce8078 name=/runtime.v1.ImageService/ImageStatus
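	
	The pull loop above ("Image docker.io/nginx:alpine not found", followed by another PullImage attempt) can be inspected from inside the node. A sketch, assuming the functional-133528 profile is still running:
	
	  # Shell into the minikube node, then query CRI-O with crictl
	  minikube ssh -p functional-133528
	  sudo crictl images | grep nginx          # nginx is absent, hence the repeated pulls
	  sudo crictl pull docker.io/nginx:alpine  # retry the pull by hand to surface the registry error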
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86cc0fe88c6c7       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   3 minutes ago       Running             coredns                   3                   5d36fadf7a6da       coredns-5dd5756b68-v5sb8
	30ea081737d0a       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd   3 minutes ago       Running             kube-proxy                3                   37ff9727474ee       kube-proxy-mkncf
	2116dde8e8b99       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   3 minutes ago       Running             kindnet-cni               3                   a796022c96694       kindnet-dl9h4
	96392a6620e44       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Running             storage-provisioner       4                   3efd3f07203da       storage-provisioner
	d10a621cbed06       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7   3 minutes ago       Running             kube-apiserver            0                   2e12a03c98cd3       kube-apiserver-functional-133528
	8395ce3fd3df4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   3 minutes ago       Running             etcd                      3                   8b74f8daa326a       etcd-functional-133528
	09ee8b8626878       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314   3 minutes ago       Running             kube-scheduler            3                   ea9eb0849f508       kube-scheduler-functional-133528
	2593a278478c4       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16   3 minutes ago       Running             kube-controller-manager   3                   2851bf48f59b1       kube-controller-manager-functional-133528
	2d22c48bdfe9c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   3 minutes ago       Exited              storage-provisioner       3                   3efd3f07203da       storage-provisioner
	5b78325e9454d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108   4 minutes ago       Exited              coredns                   2                   5d36fadf7a6da       coredns-5dd5756b68-v5sb8
	92191275672ca       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace   4 minutes ago       Exited              etcd                      2                   8b74f8daa326a       etcd-functional-133528
	c7f07eb1b7259       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd   4 minutes ago       Exited              kube-proxy                2                   37ff9727474ee       kube-proxy-mkncf
	b2c3fbc56334d       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314   4 minutes ago       Exited              kube-scheduler            2                   ea9eb0849f508       kube-scheduler-functional-133528
	9f5d00b5d83d3       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26   4 minutes ago       Exited              kindnet-cni               2                   a796022c96694       kindnet-dl9h4
	8b44216f7236c       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16   4 minutes ago       Exited              kube-controller-manager   2                   2851bf48f59b1       kube-controller-manager-functional-133528
	
	* 
	* ==> coredns [5b78325e9454dd1fe99ac139f454e03870bdba60b42287f12cb0e521e2c10a61] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33866 - 48399 "HINFO IN 7961319061928847753.8744593544221283174. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.037656071s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [86cc0fe88c6c78b08267583739f77e1c3d100c2bbba3daa3579ebbc77a801c82] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44845 - 14766 "HINFO IN 5860806339652303125.5423392488913318047. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021946689s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-133528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-133528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b
	                    minikube.k8s.io/name=functional-133528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_09T21_42_27_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Nov 2023 21:42:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-133528
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Nov 2023 21:47:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Nov 2023 21:44:22 +0000   Thu, 09 Nov 2023 21:42:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Nov 2023 21:44:22 +0000   Thu, 09 Nov 2023 21:42:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Nov 2023 21:44:22 +0000   Thu, 09 Nov 2023 21:42:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Nov 2023 21:44:22 +0000   Thu, 09 Nov 2023 21:43:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-133528
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 634af2bc831149fea177a2ae01ac2a02
	  System UUID:                2adaacbd-435b-4b00-9211-c4b8a5557649
	  Boot ID:                    c6805f31-bd75-4a7d-9a37-90ff74c38794
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-5dd5756b68-v5sb8                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m20s
	  kube-system                 etcd-functional-133528                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m32s
	  kube-system                 kindnet-dl9h4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m20s
	  kube-system                 kube-apiserver-functional-133528             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-controller-manager-functional-133528    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-proxy-mkncf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-functional-133528             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m17s                  kube-proxy       
	  Normal   Starting                 3m34s                  kube-proxy       
	  Normal   Starting                 4m19s                  kube-proxy       
	  Normal   Starting                 5m40s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m40s (x8 over 5m40s)  kubelet          Node functional-133528 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m40s (x8 over 5m40s)  kubelet          Node functional-133528 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m40s (x8 over 5m40s)  kubelet          Node functional-133528 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     5m32s                  kubelet          Node functional-133528 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m32s                  kubelet          Node functional-133528 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m32s                  kubelet          Node functional-133528 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m32s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m21s                  node-controller  Node functional-133528 event: Registered Node functional-133528 in Controller
	  Normal   NodeReady                4m47s                  kubelet          Node functional-133528 status is now: NodeReady
	  Warning  ContainerGCFailed        4m32s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m6s                   node-controller  Node functional-133528 event: Registered Node functional-133528 in Controller
	  Normal   Starting                 3m42s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m41s (x8 over 3m42s)  kubelet          Node functional-133528 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m41s (x8 over 3m42s)  kubelet          Node functional-133528 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m41s (x8 over 3m42s)  kubelet          Node functional-133528 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m24s                  node-controller  Node functional-133528 event: Registered Node functional-133528 in Controller
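	
	The node description above comes from the API server and can be regenerated at any point while the cluster is alive:
	
	  kubectl --context functional-133528 describe node functional-133528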
	
	* 
	* ==> dmesg <==
	* [  +0.001051] FS-Cache: O-key=[8] '495f3b0000000000'
	[  +0.000751] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000992] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=00000000e55cb6e2
	[  +0.001113] FS-Cache: N-key=[8] '495f3b0000000000'
	[  +0.006180] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000966] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000000f262c50
	[  +0.001104] FS-Cache: O-key=[8] '495f3b0000000000'
	[  +0.000740] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000002567691d
	[  +0.001074] FS-Cache: N-key=[8] '495f3b0000000000'
	[  +2.382360] FS-Cache: Duplicate cookie detected
	[  +0.000732] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000968] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000003bb74389
	[  +0.001058] FS-Cache: O-key=[8] '485f3b0000000000'
	[  +0.000711] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=00000000e55cb6e2
	[  +0.001078] FS-Cache: N-key=[8] '485f3b0000000000'
	[  +0.437507] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000972] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000006d162e6b
	[  +0.001059] FS-Cache: O-key=[8] '4e5f3b0000000000'
	[  +0.000716] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000004321a16c
	[  +0.001093] FS-Cache: N-key=[8] '4e5f3b0000000000'
	
	* 
	* ==> etcd [8395ce3fd3df4e3ded56c84e6e4c18f134b758b646ae1b8c77c1241ad200b9c4] <==
	* {"level":"info","ts":"2023-11-09T21:44:17.876746Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-09T21:44:17.876838Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-09T21:44:17.880555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-11-09T21:44:17.886564Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-11-09T21:44:17.886738Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-09T21:44:17.886798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-09T21:44:17.897343Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-09T21:44:17.898121Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-09T21:44:17.898194Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-09T21:44:17.898354Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-09T21:44:17.898821Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-09T21:44:19.767936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2023-11-09T21:44:19.76805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2023-11-09T21:44:19.768106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-11-09T21:44:19.768148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 5"}
	{"level":"info","ts":"2023-11-09T21:44:19.768185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 5"}
	{"level":"info","ts":"2023-11-09T21:44:19.768227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 5"}
	{"level":"info","ts":"2023-11-09T21:44:19.768267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 5"}
	{"level":"info","ts":"2023-11-09T21:44:19.774521Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-133528 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-09T21:44:19.778333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-09T21:44:19.779378Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-09T21:44:19.779824Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-09T21:44:19.786958Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-09T21:44:19.806332Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-09T21:44:19.806395Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [92191275672ca5d12dc396500279e21ee46ad99b723559b0c0fe6d55335aa03f] <==
	* {"level":"info","ts":"2023-11-09T21:43:35.288788Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-09T21:43:37.032464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2023-11-09T21:43:37.032605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2023-11-09T21:43:37.032669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-11-09T21:43:37.032727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2023-11-09T21:43:37.032764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-11-09T21:43:37.032808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2023-11-09T21:43:37.032855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-11-09T21:43:37.034644Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-133528 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-09T21:43:37.03473Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-09T21:43:37.035732Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-09T21:43:37.038406Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-09T21:43:37.039343Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-09T21:43:37.039662Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-09T21:43:37.039685Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-09T21:44:06.359977Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-09T21:44:06.36004Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-133528","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-11-09T21:44:06.360111Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-09T21:44:06.360575Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-09T21:44:06.400168Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-09T21:44:06.400357Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-09T21:44:06.400438Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-11-09T21:44:06.405654Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-09T21:44:06.405872Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-09T21:44:06.405911Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-133528","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  21:47:58 up  4:30,  0 users,  load average: 0.20, 0.62, 1.14
	Linux functional-133528 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [2116dde8e8b997cd756b9c9118e63de5efe573a0e0938b44bb604c31cfb23089] <==
	* I1109 21:45:53.761816       1 main.go:227] handling current node
	I1109 21:46:03.771980       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:46:03.772079       1 main.go:227] handling current node
	I1109 21:46:13.781978       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:46:13.782004       1 main.go:227] handling current node
	I1109 21:46:23.785891       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:46:23.785918       1 main.go:227] handling current node
	I1109 21:46:33.797433       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:46:33.797464       1 main.go:227] handling current node
	I1109 21:46:43.809810       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:46:43.809839       1 main.go:227] handling current node
	I1109 21:46:53.821605       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:46:53.821635       1 main.go:227] handling current node
	I1109 21:47:03.833657       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:47:03.833685       1 main.go:227] handling current node
	I1109 21:47:13.837652       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:47:13.837680       1 main.go:227] handling current node
	I1109 21:47:23.849323       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:47:23.849360       1 main.go:227] handling current node
	I1109 21:47:33.861779       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:47:33.861807       1 main.go:227] handling current node
	I1109 21:47:43.872041       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:47:43.872071       1 main.go:227] handling current node
	I1109 21:47:53.882537       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:47:53.882565       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [9f5d00b5d83d3bb69292d4629385eb463fc91ec1b6ef6e637accdd74d4898840] <==
	* I1109 21:43:35.004176       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1109 21:43:35.004482       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1109 21:43:35.004746       1 main.go:116] setting mtu 1500 for CNI 
	I1109 21:43:35.004812       1 main.go:146] kindnetd IP family: "ipv4"
	I1109 21:43:35.004851       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1109 21:43:39.171556       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:43:39.171593       1 main.go:227] handling current node
	I1109 21:43:49.189235       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:43:49.189292       1 main.go:227] handling current node
	I1109 21:43:59.201853       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:43:59.201883       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [d10a621cbed0679169e2db22f506d5d98ee59ab3cfd8386593cb080b2b99b50e] <==
	* I1109 21:44:21.876486       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1109 21:44:22.014669       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 21:44:22.052928       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1109 21:44:22.052951       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1109 21:44:22.053567       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1109 21:44:22.053993       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 21:44:22.055323       1 shared_informer.go:318] Caches are synced for configmaps
	E1109 21:44:22.063562       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 21:44:22.064433       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 21:44:22.068530       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1109 21:44:22.078472       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1109 21:44:22.078506       1 aggregator.go:166] initial CRD sync complete...
	I1109 21:44:22.078514       1 autoregister_controller.go:141] Starting autoregister controller
	I1109 21:44:22.078520       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 21:44:22.078548       1 cache.go:39] Caches are synced for autoregister controller
	I1109 21:44:22.758921       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 21:44:24.314356       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1109 21:44:24.468138       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1109 21:44:24.476765       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1109 21:44:24.536713       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 21:44:24.543604       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 21:44:40.811070       1 controller.go:624] quota admission added evaluator for: endpoints
	I1109 21:44:44.625994       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.132.245"}
	I1109 21:44:44.651731       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 21:44:51.450223       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.105.204"}
	
	* 
	* ==> kube-controller-manager [2593a278478c4f73492b1b5721d5db34e0bdaaeae6af601e12036aad018d73ca] <==
	* I1109 21:44:34.643179       1 shared_informer.go:318] Caches are synced for attach detach
	I1109 21:44:34.643191       1 shared_informer.go:318] Caches are synced for TTL
	I1109 21:44:34.646436       1 shared_informer.go:318] Caches are synced for service account
	I1109 21:44:34.647684       1 shared_informer.go:318] Caches are synced for persistent volume
	I1109 21:44:34.649299       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1109 21:44:34.651658       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1109 21:44:34.654850       1 shared_informer.go:318] Caches are synced for GC
	I1109 21:44:34.654970       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1109 21:44:34.655114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="93.816µs"
	I1109 21:44:34.655949       1 shared_informer.go:318] Caches are synced for PV protection
	I1109 21:44:34.657072       1 shared_informer.go:318] Caches are synced for stateful set
	I1109 21:44:34.671427       1 shared_informer.go:318] Caches are synced for deployment
	I1109 21:44:34.688891       1 shared_informer.go:318] Caches are synced for disruption
	I1109 21:44:34.743322       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1109 21:44:34.743326       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1109 21:44:34.744442       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1109 21:44:34.744465       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1109 21:44:34.767290       1 shared_informer.go:318] Caches are synced for resource quota
	I1109 21:44:34.801002       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1109 21:44:34.836612       1 shared_informer.go:318] Caches are synced for resource quota
	I1109 21:44:35.156627       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 21:44:35.156657       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1109 21:44:35.190891       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 21:44:56.072385       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I1109 21:44:56.072643       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	* 
	* ==> kube-controller-manager [8b44216f7236cfcbb54c5b02b59e8bbbb5140b98ab5aa32495ddfd1b3579d8e0] <==
	* I1109 21:43:52.146764       1 range_allocator.go:174] "Sending events to api server"
	I1109 21:43:52.146844       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1109 21:43:52.146876       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1109 21:43:52.146908       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1109 21:43:52.155366       1 shared_informer.go:318] Caches are synced for HPA
	I1109 21:43:52.158671       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1109 21:43:52.158768       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1109 21:43:52.158890       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1109 21:43:52.158979       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1109 21:43:52.165118       1 shared_informer.go:318] Caches are synced for TTL
	I1109 21:43:52.189353       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I1109 21:43:52.198107       1 shared_informer.go:318] Caches are synced for deployment
	I1109 21:43:52.217277       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1109 21:43:52.233065       1 shared_informer.go:318] Caches are synced for disruption
	I1109 21:43:52.237027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.661438ms"
	I1109 21:43:52.237196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.434µs"
	I1109 21:43:52.280718       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1109 21:43:52.340442       1 shared_informer.go:318] Caches are synced for resource quota
	I1109 21:43:52.351422       1 shared_informer.go:318] Caches are synced for resource quota
	I1109 21:43:52.678981       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 21:43:52.679015       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1109 21:43:52.685147       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 21:43:54.555050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.971µs"
	I1109 21:43:54.576529       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.928589ms"
	I1109 21:43:54.576608       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.49µs"
	
	* 
	* ==> kube-proxy [30ea081737d0af4bebfa7ed5e4b2ba63d5400eceac7b5eb5d9aeddaa006824f0] <==
	* I1109 21:44:23.477413       1 server_others.go:69] "Using iptables proxy"
	I1109 21:44:23.520239       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1109 21:44:23.555711       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 21:44:23.558272       1 server_others.go:152] "Using iptables Proxier"
	I1109 21:44:23.558403       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 21:44:23.558436       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 21:44:23.558592       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 21:44:23.558938       1 server.go:846] "Version info" version="v1.28.3"
	I1109 21:44:23.559171       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 21:44:23.560326       1 config.go:188] "Starting service config controller"
	I1109 21:44:23.560416       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 21:44:23.560462       1 config.go:97] "Starting endpoint slice config controller"
	I1109 21:44:23.560498       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 21:44:23.562175       1 config.go:315] "Starting node config controller"
	I1109 21:44:23.566536       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 21:44:23.664925       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1109 21:44:23.667175       1 shared_informer.go:318] Caches are synced for service config
	I1109 21:44:23.667520       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [c7f07eb1b7259be35d0d22c5c40df44aa58ba836cbff4b0385cd5073c4573365] <==
	* I1109 21:43:36.274508       1 server_others.go:69] "Using iptables proxy"
	I1109 21:43:39.169352       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1109 21:43:39.202698       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 21:43:39.205209       1 server_others.go:152] "Using iptables Proxier"
	I1109 21:43:39.205238       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 21:43:39.205246       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 21:43:39.205324       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 21:43:39.205548       1 server.go:846] "Version info" version="v1.28.3"
	I1109 21:43:39.205564       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 21:43:39.206269       1 config.go:188] "Starting service config controller"
	I1109 21:43:39.206497       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 21:43:39.206529       1 config.go:97] "Starting endpoint slice config controller"
	I1109 21:43:39.206534       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 21:43:39.207036       1 config.go:315] "Starting node config controller"
	I1109 21:43:39.207052       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 21:43:39.310388       1 shared_informer.go:318] Caches are synced for node config
	I1109 21:43:39.310426       1 shared_informer.go:318] Caches are synced for service config
	I1109 21:43:39.310452       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [09ee8b8626878d5d1b3f9de22d875c4415cc48029fb084877b0ad2e7282e7ba5] <==
	* I1109 21:44:19.910806       1 serving.go:348] Generated self-signed cert in-memory
	W1109 21:44:21.983485       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 21:44:21.983603       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 21:44:21.983646       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 21:44:21.983683       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 21:44:22.023559       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1109 21:44:22.023661       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 21:44:22.025778       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1109 21:44:22.026414       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 21:44:22.026515       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 21:44:22.032880       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 21:44:22.133977       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [b2c3fbc56334ddd54ebf0d46f7d25a5376e570727c01e724f3dcce0860846529] <==
	* I1109 21:43:36.586402       1 serving.go:348] Generated self-signed cert in-memory
	W1109 21:43:39.067004       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 21:43:39.067032       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 21:43:39.067042       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 21:43:39.067049       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 21:43:39.134858       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1109 21:43:39.134896       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 21:43:39.137270       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 21:43:39.137337       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 21:43:39.138266       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1109 21:43:39.138336       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 21:43:39.238490       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 21:44:06.359398       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1109 21:44:06.359502       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1109 21:44:06.359529       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1109 21:44:06.359557       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Nov 09 21:47:07 functional-133528 kubelet[4798]: E1109 21:47:07.448210    4798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6209db21-1e31-45b6-819d-4c0322a1b61d"
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.008770    4798 manager.go:1106] Failed to create existing container: /crio-9bd28aa6eb7d545ccbbb4b828cebe8563f58fed70758efd39a3a290d031fe8ac: Error finding container 9bd28aa6eb7d545ccbbb4b828cebe8563f58fed70758efd39a3a290d031fe8ac: Status 404 returned error can't find the container with id 9bd28aa6eb7d545ccbbb4b828cebe8563f58fed70758efd39a3a290d031fe8ac
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.009406    4798 manager.go:1106] Failed to create existing container: /crio-a796022c96694e6c945acdf3cf9588e4ac337b66a0ae33ea1d2244fb88c011a6: Error finding container a796022c96694e6c945acdf3cf9588e4ac337b66a0ae33ea1d2244fb88c011a6: Status 404 returned error can't find the container with id a796022c96694e6c945acdf3cf9588e4ac337b66a0ae33ea1d2244fb88c011a6
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.009634    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-37ff9727474ee82f0dddbccf65d201cfe3aa1682f1e9e6991b85661162de3269: Error finding container 37ff9727474ee82f0dddbccf65d201cfe3aa1682f1e9e6991b85661162de3269: Status 404 returned error can't find the container with id 37ff9727474ee82f0dddbccf65d201cfe3aa1682f1e9e6991b85661162de3269
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.009893    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-9bd28aa6eb7d545ccbbb4b828cebe8563f58fed70758efd39a3a290d031fe8ac: Error finding container 9bd28aa6eb7d545ccbbb4b828cebe8563f58fed70758efd39a3a290d031fe8ac: Status 404 returned error can't find the container with id 9bd28aa6eb7d545ccbbb4b828cebe8563f58fed70758efd39a3a290d031fe8ac
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.010106    4798 manager.go:1106] Failed to create existing container: /crio-8b74f8daa326a372b44dc3e7e3f38a62b366970c5394e79a097f60e857c2f956: Error finding container 8b74f8daa326a372b44dc3e7e3f38a62b366970c5394e79a097f60e857c2f956: Status 404 returned error can't find the container with id 8b74f8daa326a372b44dc3e7e3f38a62b366970c5394e79a097f60e857c2f956
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.010355    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-a796022c96694e6c945acdf3cf9588e4ac337b66a0ae33ea1d2244fb88c011a6: Error finding container a796022c96694e6c945acdf3cf9588e4ac337b66a0ae33ea1d2244fb88c011a6: Status 404 returned error can't find the container with id a796022c96694e6c945acdf3cf9588e4ac337b66a0ae33ea1d2244fb88c011a6
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.010532    4798 manager.go:1106] Failed to create existing container: /crio-37ff9727474ee82f0dddbccf65d201cfe3aa1682f1e9e6991b85661162de3269: Error finding container 37ff9727474ee82f0dddbccf65d201cfe3aa1682f1e9e6991b85661162de3269: Status 404 returned error can't find the container with id 37ff9727474ee82f0dddbccf65d201cfe3aa1682f1e9e6991b85661162de3269
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.010780    4798 manager.go:1106] Failed to create existing container: /crio-8c6527a40a3537e5e35022166abf33789be833020f6d1928957ca5248e46e60e: Error finding container 8c6527a40a3537e5e35022166abf33789be833020f6d1928957ca5248e46e60e: Status 404 returned error can't find the container with id 8c6527a40a3537e5e35022166abf33789be833020f6d1928957ca5248e46e60e
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.011089    4798 manager.go:1106] Failed to create existing container: /crio-2851bf48f59b126730df629d106108c5f5bf011bc769961ec25a5d85bdb0436c: Error finding container 2851bf48f59b126730df629d106108c5f5bf011bc769961ec25a5d85bdb0436c: Status 404 returned error can't find the container with id 2851bf48f59b126730df629d106108c5f5bf011bc769961ec25a5d85bdb0436c
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.011293    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-3efd3f07203da0c5e9cc4b1909549d3231a43db2111fe25d99ac02c28b381c7b: Error finding container 3efd3f07203da0c5e9cc4b1909549d3231a43db2111fe25d99ac02c28b381c7b: Status 404 returned error can't find the container with id 3efd3f07203da0c5e9cc4b1909549d3231a43db2111fe25d99ac02c28b381c7b
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.011542    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-2851bf48f59b126730df629d106108c5f5bf011bc769961ec25a5d85bdb0436c: Error finding container 2851bf48f59b126730df629d106108c5f5bf011bc769961ec25a5d85bdb0436c: Status 404 returned error can't find the container with id 2851bf48f59b126730df629d106108c5f5bf011bc769961ec25a5d85bdb0436c
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.011761    4798 manager.go:1106] Failed to create existing container: /crio-5d36fadf7a6da2a2d614a4642744fc8ba9f60170dbc19d01e1bc7b175dd6e3f8: Error finding container 5d36fadf7a6da2a2d614a4642744fc8ba9f60170dbc19d01e1bc7b175dd6e3f8: Status 404 returned error can't find the container with id 5d36fadf7a6da2a2d614a4642744fc8ba9f60170dbc19d01e1bc7b175dd6e3f8
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.012004    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-8c6527a40a3537e5e35022166abf33789be833020f6d1928957ca5248e46e60e: Error finding container 8c6527a40a3537e5e35022166abf33789be833020f6d1928957ca5248e46e60e: Status 404 returned error can't find the container with id 8c6527a40a3537e5e35022166abf33789be833020f6d1928957ca5248e46e60e
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.012296    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-5d36fadf7a6da2a2d614a4642744fc8ba9f60170dbc19d01e1bc7b175dd6e3f8: Error finding container 5d36fadf7a6da2a2d614a4642744fc8ba9f60170dbc19d01e1bc7b175dd6e3f8: Status 404 returned error can't find the container with id 5d36fadf7a6da2a2d614a4642744fc8ba9f60170dbc19d01e1bc7b175dd6e3f8
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.012509    4798 manager.go:1106] Failed to create existing container: /crio-3efd3f07203da0c5e9cc4b1909549d3231a43db2111fe25d99ac02c28b381c7b: Error finding container 3efd3f07203da0c5e9cc4b1909549d3231a43db2111fe25d99ac02c28b381c7b: Status 404 returned error can't find the container with id 3efd3f07203da0c5e9cc4b1909549d3231a43db2111fe25d99ac02c28b381c7b
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.012951    4798 manager.go:1106] Failed to create existing container: /crio-ea9eb0849f508dd896b72bb326b8155132584bea5aaa9e5a2aae19bc70b512c8: Error finding container ea9eb0849f508dd896b72bb326b8155132584bea5aaa9e5a2aae19bc70b512c8: Status 404 returned error can't find the container with id ea9eb0849f508dd896b72bb326b8155132584bea5aaa9e5a2aae19bc70b512c8
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.013481    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-ea9eb0849f508dd896b72bb326b8155132584bea5aaa9e5a2aae19bc70b512c8: Error finding container ea9eb0849f508dd896b72bb326b8155132584bea5aaa9e5a2aae19bc70b512c8: Status 404 returned error can't find the container with id ea9eb0849f508dd896b72bb326b8155132584bea5aaa9e5a2aae19bc70b512c8
	Nov 09 21:47:17 functional-133528 kubelet[4798]: E1109 21:47:17.013714    4798 manager.go:1106] Failed to create existing container: /docker/200da875897b4e8a8d27a31cff09c62d09cd8278c883022004757bf0027bbd64/crio-8b74f8daa326a372b44dc3e7e3f38a62b366970c5394e79a097f60e857c2f956: Error finding container 8b74f8daa326a372b44dc3e7e3f38a62b366970c5394e79a097f60e857c2f956: Status 404 returned error can't find the container with id 8b74f8daa326a372b44dc3e7e3f38a62b366970c5394e79a097f60e857c2f956
	Nov 09 21:47:21 functional-133528 kubelet[4798]: E1109 21:47:21.880401    4798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6209db21-1e31-45b6-819d-4c0322a1b61d"
	Nov 09 21:47:37 functional-133528 kubelet[4798]: E1109 21:47:37.726796    4798 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 09 21:47:37 functional-133528 kubelet[4798]: E1109 21:47:37.726853    4798 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Nov 09 21:47:37 functional-133528 kubelet[4798]: E1109 21:47:37.727079    4798 kuberuntime_manager.go:1256] container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j4grn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(8821920f-63a1-493a-8030-9f8027946df2): ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:47:37 functional-133528 kubelet[4798]: E1109 21:47:37.727128    4798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="8821920f-63a1-493a-8030-9f8027946df2"
	Nov 09 21:47:49 functional-133528 kubelet[4798]: E1109 21:47:49.881002    4798 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="8821920f-63a1-493a-8030-9f8027946df2"
	
	* 
	* ==> storage-provisioner [2d22c48bdfe9cf62e18c1f681c590edd44e1d6b1c024a0386e1a8d12fa3001f2] <==
	* I1109 21:44:04.321272       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 21:44:04.334207       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 21:44:04.334300       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [96392a6620e444f9264d796a5dc62ab656baf0a1cf565da1e9861e153c832215] <==
	* I1109 21:44:23.289017       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 21:44:23.411648       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 21:44:23.411816       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 21:44:40.813890       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 21:44:40.816356       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ec377d3-6bfd-4253-af69-01451b36b8e9", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-133528_57fef0a7-e232-4314-a178-18c97d2e105f became leader
	I1109 21:44:40.816402       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-133528_57fef0a7-e232-4314-a178-18c97d2e105f!
	I1109 21:44:40.916777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-133528_57fef0a7-e232-4314-a178-18c97d2e105f!
	I1109 21:44:56.074874       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1109 21:44:56.079046       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f52e8c2c-90ac-4088-9490-11f2bde06325", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1109 21:44:56.076066       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    9214bd8b-4f77-4f38-944f-d11d5e058404 406 0 2023-11-09 21:42:39 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-11-09 21:42:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f52e8c2c-90ac-4088-9490-11f2bde06325 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f52e8c2c-90ac-4088-9490-11f2bde06325 726 0 2023-11-09 21:44:56 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-11-09 21:44:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-11-09 21:44:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1109 21:44:56.083109       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f52e8c2c-90ac-4088-9490-11f2bde06325" provisioned
	I1109 21:44:56.083182       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1109 21:44:56.083249       1 volume_store.go:212] Trying to save persistentvolume "pvc-f52e8c2c-90ac-4088-9490-11f2bde06325"
	I1109 21:44:56.098007       1 volume_store.go:219] persistentvolume "pvc-f52e8c2c-90ac-4088-9490-11f2bde06325" saved
	I1109 21:44:56.098300       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f52e8c2c-90ac-4088-9490-11f2bde06325", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f52e8c2c-90ac-4088-9490-11f2bde06325
	

                                                
                                                
-- /stdout --
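Note on the minikube logs above: every error in the kubelet section traces back to a single root cause: docker.io is refusing pulls of nginx and nginx:alpine with toomanyrequests (Docker Hub rate-limits anonymous pulls per source IP). The control-plane components themselves (apiserver, scheduler, controller-manager, kube-proxy, kindnet) all come up cleanly. A quick manual check to confirm the throttling from inside the node (a sketch; assumes crictl is available in the minikube node, as it normally is with the crio runtime):

	minikube -p functional-133528 ssh -- sudo crictl pull docker.io/nginx:alpine

When the registry is throttling, crictl surfaces the same toomanyrequests message seen in the kubelet log.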
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-133528 -n functional-133528
helpers_test.go:261: (dbg) Run:  kubectl --context functional-133528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-133528 describe pod nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-133528 describe pod nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-133528/192.168.49.2
	Start Time:       Thu, 09 Nov 2023 21:44:51 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4grn (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-j4grn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m8s                 default-scheduler  Successfully assigned default/nginx-svc to functional-133528
	  Warning  Failed     2m37s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     82s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    55s (x3 over 3m8s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     22s (x3 over 2m37s)  kubelet            Error: ErrImagePull
	  Warning  Failed     22s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    10s (x3 over 2m36s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     10s (x3 over 2m36s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-133528/192.168.49.2
	Start Time:       Thu, 09 Nov 2023 21:44:56 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqbxp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-pqbxp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m3s                default-scheduler  Successfully assigned default/sp-pod to functional-133528
	  Warning  Failed     52s (x2 over 2m7s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     52s (x2 over 2m7s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    38s (x2 over 2m6s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     38s (x2 over 2m6s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    23s (x3 over 3m3s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.07s)
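Note: the PersistentVolumeClaim machinery itself worked in this run: the storage-provisioner log above shows pvc-f52e8c2c-90ac-4088-9490-11f2bde06325 provisioned and saved, and sp-pod reached PodScheduled=True. The test failed only because the pod images could not be pulled. One possible mitigation for this class of flake (a sketch, not what the test currently does) is to preload the images from a host that still has pull quota, so the kubelet never has to contact Docker Hub:

	docker pull docker.io/nginx:alpine
	docker pull docker.io/nginx
	minikube -p functional-133528 image load docker.io/nginx:alpine
	minikube -p functional-133528 image load docker.io/nginx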

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-133528 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8821920f-63a1-493a-8030-9f8027946df2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-133528 -n functional-133528
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2023-11-09 21:48:51.861727453 +0000 UTC m=+1256.603698305
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-133528 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-133528 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-133528/192.168.49.2
Start Time:       Thu, 09 Nov 2023 21:44:51 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:  10.244.0.4
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j4grn (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-j4grn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-133528
Warning  Failed     3m29s                kubelet            Failed to pull image "docker.io/nginx:alpine": initializing source docker://nginx:alpine: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m14s                kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     74s (x3 over 3m29s)  kubelet            Error: ErrImagePull
Warning  Failed     74s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    36s (x5 over 3m28s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     36s (x5 over 3m28s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    24s (x4 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-133528 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-133528 logs nginx-svc -n default: exit status 1 (90.645616ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-133528 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.96s)
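Note: this setup step polls for pods labeled run=nginx-svc and hit the same Docker Hub rate limit as the tests above. A manual equivalent of the condition the harness waits on (a sketch, not the harness code itself):

	kubectl --context functional-133528 wait pod -l run=nginx-svc -n default --for=condition=Ready --timeout=4m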

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (110.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-133528 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.104.105.204   10.104.105.204   80:30735/TCP   5m51s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (110.34s)
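Note: the empty URL ("http://" with no host) means the preceding serial tunnel step never handed this test a reachable endpoint. nginx-svc does show an EXTERNAL-IP, but with its only pod stuck in ImagePullBackOff the service has no ready backends, so even a direct request to 10.104.105.204 would have failed. A quick way to confirm there is no ready endpoint behind the service (assuming the same cluster context as above):

	kubectl --context functional-133528 get endpoints nginx-svc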

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.41s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-861900 addons enable ingress --alsologtostderr -v=5
E1109 21:54:50.728052  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:50.733337  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:50.743595  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:50.763854  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:50.804134  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:50.884462  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:51.045227  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:51.365853  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:52.005998  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:53.287020  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:54:55.847678  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:55:00.968531  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:55:11.209310  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:55:31.689511  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:56:12.649789  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:56:16.647457  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:57:34.569980  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 21:57:39.782593  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-861900 addons enable ingress --alsologtostderr -v=5: exit status 10 (6m0.939136962s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 21:52:11.603185  745551 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:52:11.603862  745551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:52:11.603874  745551 out.go:309] Setting ErrFile to fd 2...
	I1109 21:52:11.603881  745551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:52:11.604171  745551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 21:52:11.604477  745551 mustload.go:65] Loading cluster: ingress-addon-legacy-861900
	I1109 21:52:11.604861  745551 config.go:182] Loaded profile config "ingress-addon-legacy-861900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1109 21:52:11.604897  745551 addons.go:594] checking whether the cluster is paused
	I1109 21:52:11.605003  745551 config.go:182] Loaded profile config "ingress-addon-legacy-861900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1109 21:52:11.605046  745551 host.go:66] Checking if "ingress-addon-legacy-861900" exists ...
	I1109 21:52:11.605617  745551 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:52:11.624210  745551 ssh_runner.go:195] Run: systemctl --version
	I1109 21:52:11.624270  745551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:52:11.642942  745551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:52:11.740022  745551 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 21:52:11.740098  745551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 21:52:11.787306  745551 cri.go:89] found id: "8fbecc9c3f5472a4700e41a971d8b829446928fdb54c4f4884443548babded41"
	I1109 21:52:11.787328  745551 cri.go:89] found id: "2376cb1b3a6b6813a5d2302411ed07beeb5f8e1f6497ff21408c390d11068428"
	I1109 21:52:11.787334  745551 cri.go:89] found id: "12c0413d19e2af170e00351d7872dbe4a650e36feb06b0bbe6b127a217ebae87"
	I1109 21:52:11.787339  745551 cri.go:89] found id: "6e4b6f3bb3bee815134504a4788b7def949611905937dfa311e8debaec65eba1"
	I1109 21:52:11.787343  745551 cri.go:89] found id: "4ff81395ca0988ad3efbbe16de8845b0b6172216dc3f75ea574f05562d6683e9"
	I1109 21:52:11.787347  745551 cri.go:89] found id: "89853e1bb576e1a9e0b434efb8cb619e1e4814816a36c27eee433f8f804af1a9"
	I1109 21:52:11.787351  745551 cri.go:89] found id: "7e2e0409daae43d6039fc6b745df10ddcf31675c7ccec53ae59db703d6f88eec"
	I1109 21:52:11.787355  745551 cri.go:89] found id: "e7bf2710aeb7bc4b1cd8b33e83d715899c5277475057a2ba6df96976ef84be72"
	I1109 21:52:11.787360  745551 cri.go:89] found id: ""
	I1109 21:52:11.787418  745551 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 21:52:11.818008  745551 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"12c0413d19e2af170e00351d7872dbe4a650e36feb06b0bbe6b127a217ebae87","pid":2107,"status":"running","bundle":"/run/containers/storage/overlay-containers/12c0413d19e2af170e00351d7872dbe4a650e36feb06b0bbe6b127a217ebae87/userdata","rootfs":"/var/lib/containers/storage/overlay/b9bac87af50abdbe4c7da064139b9418b942f09024e6ac75521d697a1e49f060/merged","created":"2023-11-09T21:51:57.530854814Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a18dea0b","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a18dea0b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"12c0413d19e2af170e00351d7872dbe4a650e36feb06b0bbe6b127a217ebae87","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:51:57.476918975Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-qmz79\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5c7f9d10-cffa-44a4-ab40-247ae020d804\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-qmz79_5c7f9d10-cffa-44a4-ab40-247ae020d804/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b9bac87af50abdbe4c7da064139b9418b942f09024e6ac75521d697a1e49f060/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-qmz79_kube-system_5c7f9d10-cffa-44a4-ab40-247ae020d804_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2c2e4cab2336494ff897e36ca61468272b5301cf661ed0a0ee1733df32c64b9a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2c2e4cab2336494ff897e36ca61468272b5301cf661ed0a0ee1733df32c64b9a","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-qmz79_kube-system_5c7f9d10-cffa-44a4-ab40-247ae020d804_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5c7f9d10-cffa-44a4-ab40-247ae020d804/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5c7f9d10-cffa-44a4-ab40-247ae020d804/containers/kindnet-cni/90bda928\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/5c7f9d10-cffa-44a4-ab40-247ae020d804/volumes/kubernetes.io~secret/kindnet-token-4tpqg\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-qmz79","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5c7f9d10-cffa-44a4-ab40-247ae020d804","kubernetes.io/config.seen":"2023-11-09T21:51:55.005687639Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2376cb1b3a6b6813a5d2302411ed07beeb5f8e1f6497ff21408c390d11068428","pid":2223,"status":"running","bundle":"/run/containers/storage/overlay-containers/2376cb1b3a6b6813a5d2302411ed07beeb5f8e1f6497ff21408c390d11068428/userdata","rootfs":"/var/lib/containers/storage/overlay/5f1f66a4fc656715335f6d17f46e58f6005ce770e1a8c505ee6ff074b8ac07e4/merged","created":"2023-11-09T21:52:06.63316629Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"eec3246c","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"eec3246c\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2376cb1b3a6b6813a5d2302411ed07beeb5f8e1f6497ff21408c390d11068428","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:52:06.586632474Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.7","io.kubernetes.cri-o.ImageRef":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bff467f8-xvlpj\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"21a49005-d70f-4ed3-b4ee-c152858ec6bb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-66bff467f8-xvlpj_21a49005-d70f-4ed3-b4ee-c152858ec6bb/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5f1f66a4fc656715335f6d17f46e58f6005ce770e1a8c505ee6ff074b8ac07e4/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bff467f8-xvlpj_kube-system_21a49005-d70f-4ed3-b4ee-c152858ec6bb_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c08abe0554ec62561fa9cea0bed54df79c3f9e8dbff874d5a96b3d940704db35/userdata/resolv.co
nf","io.kubernetes.cri-o.SandboxID":"c08abe0554ec62561fa9cea0bed54df79c3f9e8dbff874d5a96b3d940704db35","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bff467f8-xvlpj_kube-system_21a49005-d70f-4ed3-b4ee-c152858ec6bb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/21a49005-d70f-4ed3-b4ee-c152858ec6bb/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/21a49005-d70f-4ed3-b4ee-c152858ec6bb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/21a49005-d70f-4ed3-b4ee-c152858ec6bb/containers/coredns/f74c7f39\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":fal
se},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/21a49005-d70f-4ed3-b4ee-c152858ec6bb/volumes/kubernetes.io~secret/coredns-token-dn4qn\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bff467f8-xvlpj","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"21a49005-d70f-4ed3-b4ee-c152858ec6bb","kubernetes.io/config.seen":"2023-11-09T21:52:05.915673240Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4ff81395ca0988ad3efbbe16de8845b0b6172216dc3f75ea574f05562d6683e9","pid":1508,"status":"running","bundle":"/run/containers/storage/overlay-containers/4ff81395ca0988ad3efbbe16de8845b0b6172216dc3f75ea574f05562d6683e9/userdata","rootfs":"/var/lib/containers/storage/overlay/42e74f491a48523820fc67dcdadc15f37ab189824936374a619efdfc6d83249b/merged","created":"2023-11-09T21:51:30.473927418Z","annotations":{"io.c
ontainer.manager":"cri-o","io.kubernetes.container.hash":"ef5ef709","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5ef709\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4ff81395ca0988ad3efbbe16de8845b0b6172216dc3f75ea574f05562d6683e9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:51:30.40980049Z","io.kubernetes.cri-o.Image":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.18.20","io.kubernetes.cri-o.ImageRef":"095f370
15706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ingress-addon-legacy-861900\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d12e497b0008e22acbcd5a9cf2dd48ac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ingress-addon-legacy-861900_d12e497b0008e22acbcd5a9cf2dd48ac/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/42e74f491a48523820fc67dcdadc15f37ab189824936374a619efdfc6d83249b/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-861900_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4b8298eaa7ed3d9791a035e7e11896ea247b4bb129e37961c63500dcfa98fdb8/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":
"4b8298eaa7ed3d9791a035e7e11896ea247b4bb129e37961c63500dcfa98fdb8","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ingress-addon-legacy-861900_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/containers/kube-scheduler/2d491d39\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ingress-addon-legacy-861900","io
.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.hash":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.seen":"2023-11-09T21:51:25.903338505Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6e4b6f3bb3bee815134504a4788b7def949611905937dfa311e8debaec65eba1","pid":1974,"status":"running","bundle":"/run/containers/storage/overlay-containers/6e4b6f3bb3bee815134504a4788b7def949611905937dfa311e8debaec65eba1/userdata","rootfs":"/var/lib/containers/storage/overlay/d59efe8f948f6cd148a3c2595df4abf0233f474d09cded955da22fdd8bf58e8a/merged","created":"2023-11-09T21:51:55.848448665Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dfb54ddb","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminat
ionMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"dfb54ddb\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6e4b6f3bb3bee815134504a4788b7def949611905937dfa311e8debaec65eba1","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:51:55.66317743Z","io.kubernetes.cri-o.Image":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.18.20","io.kubernetes.cri-o.ImageRef":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-hzpwp\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9ef89c
7b-9e45-4303-a315-31aa5a71b12a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-hzpwp_9ef89c7b-9e45-4303-a315-31aa5a71b12a/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d59efe8f948f6cd148a3c2595df4abf0233f474d09cded955da22fdd8bf58e8a/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-hzpwp_kube-system_9ef89c7b-9e45-4303-a315-31aa5a71b12a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/2e40e19b9b394aaabae071043c488ade147a7156f6e862fbbda7ff881a9ca34b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"2e40e19b9b394aaabae071043c488ade147a7156f6e862fbbda7ff881a9ca34b","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-hzpwp_kube-system_9ef89c7b-9e45-4303-a315-31aa5a71b12a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.
Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9ef89c7b-9e45-4303-a315-31aa5a71b12a/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9ef89c7b-9e45-4303-a315-31aa5a71b12a/containers/kube-proxy/b4a4e908\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/9ef89c7b-9e45-4303-a315-31aa5a71b12a/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9e
f89c7b-9e45-4303-a315-31aa5a71b12a/volumes/kubernetes.io~secret/kube-proxy-token-c4pzq\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-hzpwp","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9ef89c7b-9e45-4303-a315-31aa5a71b12a","kubernetes.io/config.seen":"2023-11-09T21:51:54.994101503Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7e2e0409daae43d6039fc6b745df10ddcf31675c7ccec53ae59db703d6f88eec","pid":1458,"status":"running","bundle":"/run/containers/storage/overlay-containers/7e2e0409daae43d6039fc6b745df10ddcf31675c7ccec53ae59db703d6f88eec/userdata","rootfs":"/var/lib/containers/storage/overlay/89b8314180cb65b3d6c48f5514b4097dd56813795c938b418df65197b12c0079/merged","created":"2023-11-09T21:51:30.422175391Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ce880c0b","io.kubernetes.container.name":"kube-controller-man
ager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ce880c0b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"7e2e0409daae43d6039fc6b745df10ddcf31675c7ccec53ae59db703d6f88eec","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:51:30.334307314Z","io.kubernetes.cri-o.Image":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.18.20","io.kubernetes.cri-o.ImageRef":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.Labels":"{\"io.kuber
netes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ingress-addon-legacy-861900\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"49b043cd68fd30a453bdf128db5271f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ingress-addon-legacy-861900_49b043cd68fd30a453bdf128db5271f3/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/89b8314180cb65b3d6c48f5514b4097dd56813795c938b418df65197b12c0079/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-861900_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/61596e31e7a392ed636c6b249a3dd73fe2cf27f75c77b933750fa04868b7b9b9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"61596e31e7a392ed636c6b249a3dd73fe2cf
27f75c77b933750fa04868b7b9b9","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ingress-addon-legacy-861900_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/containers/kube-controller-manager/ca1948b3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"seli
nux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ingress-addon-legacy-861900","io.kubernetes.pod.namespace":"kube-system",
"io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.hash":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.seen":"2023-11-09T21:51:25.901663672Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"89853e1bb576e1a9e0b434efb8cb619e1e4814816a36c27eee433f8f804af1a9","pid":1493,"status":"running","bundle":"/run/containers/storage/overlay-containers/89853e1bb576e1a9e0b434efb8cb619e1e4814816a36c27eee433f8f804af1a9/userdata","rootfs":"/var/lib/containers/storage/overlay/f61848699b3801c4b9bccdf6061bd9aa73e9a93eb3a3d5228be0efd1f4f60ebb/merged","created":"2023-11-09T21:51:30.489164618Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"dd7c40e4","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.A
nnotations":"{\"io.kubernetes.container.hash\":\"dd7c40e4\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"89853e1bb576e1a9e0b434efb8cb619e1e4814816a36c27eee433f8f804af1a9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:51:30.384481746Z","io.kubernetes.cri-o.Image":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ingress-addon-legacy-861900\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ceaa7110459fc5b5d36ea55dc5b4945f\"}","io.kubernet
es.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ingress-addon-legacy-861900_ceaa7110459fc5b5d36ea55dc5b4945f/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f61848699b3801c4b9bccdf6061bd9aa73e9a93eb3a3d5228be0efd1f4f60ebb/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ingress-addon-legacy-861900_kube-system_ceaa7110459fc5b5d36ea55dc5b4945f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f9b15b2de5254c26cae9f14d0efa855bf52ac282586ae4bb850c51f4648ce26d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f9b15b2de5254c26cae9f14d0efa855bf52ac282586ae4bb850c51f4648ce26d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ingress-addon-legacy-861900_kube-system_ceaa7110459fc5b5d36ea55dc5b4945f_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\
":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ceaa7110459fc5b5d36ea55dc5b4945f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ceaa7110459fc5b5d36ea55dc5b4945f/containers/etcd/b1d7b0de\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ingress-addon-legacy-861900","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ceaa7110459fc5b5d36ea55dc5b4945f","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"ceaa7110459fc5b5
d36ea55dc5b4945f","kubernetes.io/config.seen":"2023-11-09T21:51:25.904594845Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8fbecc9c3f5472a4700e41a971d8b829446928fdb54c4f4884443548babded41","pid":2273,"status":"running","bundle":"/run/containers/storage/overlay-containers/8fbecc9c3f5472a4700e41a971d8b829446928fdb54c4f4884443548babded41/userdata","rootfs":"/var/lib/containers/storage/overlay/219ee44657b2709c5dadeb5b6adbff27d3dd88f995a62d69d34f9c5e88f0807e/merged","created":"2023-11-09T21:52:08.579431807Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2a4244f2","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2a4244f2\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.term
inationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8fbecc9c3f5472a4700e41a971d8b829446928fdb54c4f4884443548babded41","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:52:08.533578743Z","io.kubernetes.cri-o.Image":"gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d1a286b9-e693-4d7c-88d0-ab36ed6c87a8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_
d1a286b9-e693-4d7c-88d0-ab36ed6c87a8/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/219ee44657b2709c5dadeb5b6adbff27d3dd88f995a62d69d34f9c5e88f0807e/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_d1a286b9-e693-4d7c-88d0-ab36ed6c87a8_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/37d1ea607b8b5bb461983c38d562a93f774f4bc6813db0538d602b37e193d215/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"37d1ea607b8b5bb461983c38d562a93f774f4bc6813db0538d602b37e193d215","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_d1a286b9-e693-4d7c-88d0-ab36ed6c87a8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":
false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d1a286b9-e693-4d7c-88d0-ab36ed6c87a8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d1a286b9-e693-4d7c-88d0-ab36ed6c87a8/containers/storage-provisioner/98cced63\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d1a286b9-e693-4d7c-88d0-ab36ed6c87a8/volumes/kubernetes.io~secret/storage-provisioner-token-dvc9j\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d1a286b9-e693-4d7c-88d0-ab36ed6c87a8","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"
Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-11-09T21:52:05.921205632Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e7bf2710aeb7bc4b1cd8b33e83d715899c5277475057a2ba6df96976ef84be72","pid":1429,"status":"running","bundle":"/run/containers/storage/overlay-containers/e7bf2710aeb7bc4b1cd8b33e83d715899c5277475057a2ba6df96976ef84be72/userdata","rootfs":"/var/lib/contai
ners/storage/overlay/fe8cbc4e2aaff8193d61d9e5bd1d403ea98f592fffc8b26f9cfc5ed2a7371709/merged","created":"2023-11-09T21:51:30.344417723Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd1dd8ff","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd1dd8ff\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e7bf2710aeb7bc4b1cd8b33e83d715899c5277475057a2ba6df96976ef84be72","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-09T21:51:30.260437031Z","io.kubernetes.cri-o.Image":"2694cf044d66591c
37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.18.20","io.kubernetes.cri-o.ImageRef":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ingress-addon-legacy-861900\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"78b40af95c64e5112ac985f00b18628c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ingress-addon-legacy-861900_78b40af95c64e5112ac985f00b18628c/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fe8cbc4e2aaff8193d61d9e5bd1d403ea98f592fffc8b26f9cfc5ed2a7371709/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-861900_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.ResolvPath":"/run
/containers/storage/overlay-containers/9c199d47751a8a0c7c77aceb285510a0ef7d42a38879466dd8f5b5689763231b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"9c199d47751a8a0c7c77aceb285510a0ef7d42a38879466dd8f5b5689763231b","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ingress-addon-legacy-861900_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/containers/kube-apiserver/b56c348b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/etc-hosts\",\
"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ingress-addon-legacy-861900","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"78b40af95c64e5112ac985f00b18628c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"78b40
af95c64e5112ac985f00b18628c","kubernetes.io/config.seen":"2023-11-09T21:51:25.901127461Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I1109 21:52:11.818678  745551 cri.go:126] list returned 8 containers
	I1109 21:52:11.818694  745551 cri.go:129] container: {ID:12c0413d19e2af170e00351d7872dbe4a650e36feb06b0bbe6b127a217ebae87 Status:running}
	I1109 21:52:11.818708  745551 cri.go:135] skipping {12c0413d19e2af170e00351d7872dbe4a650e36feb06b0bbe6b127a217ebae87 running}: state = "running", want "paused"
	I1109 21:52:11.818718  745551 cri.go:129] container: {ID:2376cb1b3a6b6813a5d2302411ed07beeb5f8e1f6497ff21408c390d11068428 Status:running}
	I1109 21:52:11.818730  745551 cri.go:135] skipping {2376cb1b3a6b6813a5d2302411ed07beeb5f8e1f6497ff21408c390d11068428 running}: state = "running", want "paused"
	I1109 21:52:11.818740  745551 cri.go:129] container: {ID:4ff81395ca0988ad3efbbe16de8845b0b6172216dc3f75ea574f05562d6683e9 Status:running}
	I1109 21:52:11.818747  745551 cri.go:135] skipping {4ff81395ca0988ad3efbbe16de8845b0b6172216dc3f75ea574f05562d6683e9 running}: state = "running", want "paused"
	I1109 21:52:11.818754  745551 cri.go:129] container: {ID:6e4b6f3bb3bee815134504a4788b7def949611905937dfa311e8debaec65eba1 Status:running}
	I1109 21:52:11.818760  745551 cri.go:135] skipping {6e4b6f3bb3bee815134504a4788b7def949611905937dfa311e8debaec65eba1 running}: state = "running", want "paused"
	I1109 21:52:11.818770  745551 cri.go:129] container: {ID:7e2e0409daae43d6039fc6b745df10ddcf31675c7ccec53ae59db703d6f88eec Status:running}
	I1109 21:52:11.818777  745551 cri.go:135] skipping {7e2e0409daae43d6039fc6b745df10ddcf31675c7ccec53ae59db703d6f88eec running}: state = "running", want "paused"
	I1109 21:52:11.818789  745551 cri.go:129] container: {ID:89853e1bb576e1a9e0b434efb8cb619e1e4814816a36c27eee433f8f804af1a9 Status:running}
	I1109 21:52:11.818796  745551 cri.go:135] skipping {89853e1bb576e1a9e0b434efb8cb619e1e4814816a36c27eee433f8f804af1a9 running}: state = "running", want "paused"
	I1109 21:52:11.818805  745551 cri.go:129] container: {ID:8fbecc9c3f5472a4700e41a971d8b829446928fdb54c4f4884443548babded41 Status:running}
	I1109 21:52:11.818816  745551 cri.go:135] skipping {8fbecc9c3f5472a4700e41a971d8b829446928fdb54c4f4884443548babded41 running}: state = "running", want "paused"
	I1109 21:52:11.818826  745551 cri.go:129] container: {ID:e7bf2710aeb7bc4b1cd8b33e83d715899c5277475057a2ba6df96976ef84be72 Status:running}
	I1109 21:52:11.818833  745551 cri.go:135] skipping {e7bf2710aeb7bc4b1cd8b33e83d715899c5277475057a2ba6df96976ef84be72 running}: state = "running", want "paused"
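
The eight cri.go:135 "skipping" lines above come from filtering the `sudo runc list -f json` output by state: this code path wants containers whose status is "paused", and all eight report "running", so every one is skipped. A minimal, self-contained sketch of that decode-and-filter step, assuming only the `id` and `status` fields visible in the JSON dump (illustrative; not minikube's actual cri package):

// Sketch: decode `sudo runc list -f json` and keep only containers in the
// wanted state. All names here are hypothetical, not minikube internals.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listByState(want string) ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var all []runcContainer
	if err := json.Unmarshal(out, &all); err != nil {
		return nil, fmt.Errorf("decode runc JSON: %w", err)
	}
	var ids []string
	for _, c := range all {
		if c.Status != want {
			// mirrors the log: skipping {<id> running}: state = "running", want "paused"
			continue
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	ids, err := listByState("paused")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("found %d paused containers: %v\n", len(ids), ids)
}

With every container running, the filtered list is empty, which is why the pause check falls through and addon setup proceeds below.
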
	I1109 21:52:11.822040  745551 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1109 21:52:11.824690  745551 config.go:182] Loaded profile config "ingress-addon-legacy-861900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1109 21:52:11.824709  745551 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-861900"
	I1109 21:52:11.824717  745551 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-861900"
	I1109 21:52:11.824755  745551 host.go:66] Checking if "ingress-addon-legacy-861900" exists ...
	I1109 21:52:11.825175  745551 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:52:11.844810  745551 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1109 21:52:11.847025  745551 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1109 21:52:11.849680  745551 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1109 21:52:11.851966  745551 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 21:52:11.851987  745551 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1109 21:52:11.852057  745551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:52:11.869343  745551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:52:11.979625  745551 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
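
The two steps above render the addon manifest into memory, ship it to the node ("scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml"), and apply it with the version-pinned kubectl under the node's kubeconfig. A rough sketch of the same flow over SSH, using golang.org/x/crypto/ssh and the host, port, user, and key path reported by sshutil.go:53 (an illustration; minikube's ssh_runner/sshutil handle this differently):

// Sketch: stream a manifest held in memory onto the node, then apply it with
// the node-local kubectl. Host, port, and key path are taken from the log
// line above and are otherwise placeholders.
package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func applyAddon(client *ssh.Client, manifest []byte) error {
	// step 1: equivalent of "scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml"
	put, err := client.NewSession()
	if err != nil {
		return err
	}
	defer put.Close()
	put.Stdin = bytes.NewReader(manifest)
	if err := put.Run("sudo tee /etc/kubernetes/addons/ingress-deploy.yaml >/dev/null"); err != nil {
		return fmt.Errorf("upload manifest: %w", err)
	}
	// step 2: apply with the pinned kubectl, as in the ssh_runner line above
	run, err := client.NewSession()
	if err != nil {
		return err
	}
	defer run.Close()
	return run.Run("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml")
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33690", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	manifest, err := os.ReadFile("ingress-deploy.yaml")
	if err != nil {
		panic(err)
	}
	fmt.Println(applyAddon(client, manifest))
}
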
	I1109 21:52:12.435858  745551 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-861900"
	I1109 21:52:12.438180  745551 out.go:177] * Verifying ingress addon...
	I1109 21:52:12.442595  745551 kapi.go:59] client config for ingress-addon-legacy-861900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]ui
nt8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 21:52:12.443337  745551 cert_rotation.go:137] Starting client certificate rotation controller
	I1109 21:52:12.443759  745551 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 21:52:12.466203  745551 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 21:52:12.466272  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:12.472353  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:12.976681  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:13.477055  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:13.976389  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:14.476833  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:14.977106  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:15.476555  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:15.977252  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:16.476603  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:16.977346  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:17.476518  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:17.976764  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:18.477177  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:18.976339  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:19.476222  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:19.976280  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:20.476564  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:20.977266  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:21.476654  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:21.976121  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:22.476378  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:22.976619  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:23.477202  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:23.976411  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:24.476798  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:24.977322  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:25.476458  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:25.976822  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:26.476957  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:26.976166  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:27.476139  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:27.976706  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:28.476967  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:28.976938  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:29.476263  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:29.976624  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:30.477014  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:30.976361  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:31.476763  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:31.976294  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:32.477084  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:32.977291  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:33.476555  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:33.976935  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:34.476304  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:34.976626  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:35.476931  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:35.976350  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:36.476643  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:36.977186  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:37.480681  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:37.977081  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:38.476373  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:38.976794  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:39.476544  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:39.976821  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:40.477105  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:40.976224  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:41.476785  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:41.976412  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:42.477046  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:42.976497  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:43.476718  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:43.976952  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:44.476370  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:44.976777  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:45.477220  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:45.976160  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:46.476593  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:46.976846  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:47.477416  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:47.976990  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:48.476318  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:48.976577  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:49.477008  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:49.976305  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:50.476669  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:50.977052  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:51.476324  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:51.976898  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:52.476562  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:52.976763  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:53.477217  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:53.976503  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:54.476622  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:54.976817  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:55.477411  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:55.976618  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:56.477380  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:56.977061  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:57.476495  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:57.976975  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:58.477244  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:58.976427  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:59.477100  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:52:59.976438  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:00.476917  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:00.976199  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:01.476554  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:01.977446  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:02.476333  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:02.976870  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:03.476192  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:03.976511  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:04.477017  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:04.976304  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:05.476486  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:05.976951  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:06.477174  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:06.976593  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:07.476928  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:07.976150  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:08.476307  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:53:08.977011  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... the same "waiting for pod \"app.kubernetes.io/name=ingress-nginx\", current state: Pending: [<nil>]" message repeats at ~500ms intervals from 21:53:09 through 21:57:13; the pod remained Pending for the entire window ...]
	I1109 21:57:13.976250  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:14.476251  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:14.976871  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:15.476323  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:15.976476  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:16.476344  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:16.976686  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:17.476991  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:17.976197  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:18.476417  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:18.976797  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:19.476111  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:19.976377  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:20.476111  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:20.976419  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:21.476631  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:21.977004  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:22.476333  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:22.976504  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:23.476844  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:23.977031  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:24.476634  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:24.976939  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:25.476242  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:25.976448  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:26.476437  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:26.977241  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:27.482946  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:27.976658  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:28.476866  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:28.977233  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:29.476699  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:29.976988  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:30.476302  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:30.976144  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:31.476514  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:31.977418  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:32.477183  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:32.976675  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:33.477231  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:33.976483  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:34.476697  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:34.977048  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:35.476213  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:35.976399  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:36.476652  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:36.977639  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:37.477019  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:37.976458  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:38.477132  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:38.976528  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:39.477065  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:39.976287  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:40.476166  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:40.976381  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:41.477195  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:41.976840  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:42.476174  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:42.976615  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:43.477005  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:43.976226  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:44.476177  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:44.976345  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:45.476289  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:45.976486  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:46.476637  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:46.977536  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:47.476762  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:47.977054  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:48.475996  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:48.976229  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:49.476594  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:49.977080  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:50.476267  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:50.976373  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:51.476689  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:51.976239  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:52.476773  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:52.977154  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:53.476201  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:53.976441  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:54.476565  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:54.976741  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:55.477042  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:55.976982  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:56.477327  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:56.977140  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:57.476241  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:57.976488  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:58.477416  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:58.976408  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:59.476574  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:57:59.976902  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:00.476123  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:00.976394  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:01.476908  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:01.976313  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:02.476843  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:02.976939  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:03.477111  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:03.976258  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:04.476266  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:04.976428  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:05.476547  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:05.976927  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:06.476030  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:06.976201  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:07.476428  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:07.976505  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:08.476721  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:08.977049  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:09.476824  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:09.977055  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:10.476367  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:10.976789  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:11.477172  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:11.976814  745551 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 21:58:12.444495  745551 kapi.go:107] duration metric: took 6m0.000726237s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 21:58:12.446829  745551 out.go:177] 
	W1109 21:58:12.448686  745551 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	W1109 21:58:12.448705  745551 out.go:239] * 
	W1109 21:58:12.454894  745551 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 21:58:12.456959  745551 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
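For reference, exit status 10 corresponds to the MK_ADDON_ENABLE error above: the addon-enable callback polled the pod selector roughly every 500ms until its 6-minute deadline expired. Below is a minimal, self-contained Go sketch of that poll-until-deadline pattern; it is an illustration, not minikube's actual kapi.go code, and the always-false condition stands in for a real pod-readiness check.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls check every interval until it reports true or ctx expires.
	func waitFor(ctx context.Context, interval time.Duration, check func() bool) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if check() {
				return nil
			}
			select {
			case <-ctx.Done():
				// Surfaces as "context deadline exceeded", as in the log above.
				return fmt.Errorf("waiting for pods: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		err := waitFor(ctx, 500*time.Millisecond, func() bool {
			// A real check would list pods matching
			// app.kubernetes.io/name=ingress-nginx and require them to be Ready.
			return false
		})
		if errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("enable failed:", err)
		}
	}

Because the stand-in check never succeeds, the loop runs until the deadline and the wrapped context.DeadlineExceeded surfaces, mirroring the failure recorded above.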
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-861900
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-861900:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110",
	        "Created": "2023-11-09T21:51:05.825345896Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 743031,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T21:51:06.164313049Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/hostname",
	        "HostsPath": "/var/lib/docker/containers/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/hosts",
	        "LogPath": "/var/lib/docker/containers/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110-json.log",
	        "Name": "/ingress-addon-legacy-861900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-861900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-861900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5c8d7ee176c000a04f53c515e94bf6d7dcf6d89aefb4e433df5046cab97170c4-init/diff:/var/lib/docker/overlay2/7d8c4fc646533218e970cbbc2fae53265551a122428b3ce7f5ec8807d1cc9c68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c8d7ee176c000a04f53c515e94bf6d7dcf6d89aefb4e433df5046cab97170c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c8d7ee176c000a04f53c515e94bf6d7dcf6d89aefb4e433df5046cab97170c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c8d7ee176c000a04f53c515e94bf6d7dcf6d89aefb4e433df5046cab97170c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-861900",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-861900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-861900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-861900",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-861900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "04d42e0c1ae75de0bb2d9545510cd033f55ec411f420237303f4b8d438827aa5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33690"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33689"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33686"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33688"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33687"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/04d42e0c1ae7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-861900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "585037b4c122",
	                        "ingress-addon-legacy-861900"
	                    ],
	                    "NetworkID": "7014a50d33f8d4bd752ad2c32fcaf50e13607d4948bf7731d462ff2e96b450f9",
	                    "EndpointID": "bb5f9ab5286632afa6dc31ea8eef8ae45a233daec0e1910488c382c794428c19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
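The inspect output above shows the kicbase container running, with each guest port (22, 2376, 5000, 8443, 32443) published on an ephemeral 127.0.0.1 host port. For readers reproducing this check programmatically, here is a rough sketch using the Docker Go SDK; it assumes github.com/docker/docker is available as a dependency, and the container name is taken from the log.

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		// Connect using the same environment (DOCKER_HOST, etc.) as the docker CLI.
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Equivalent of `docker inspect ingress-addon-legacy-861900`.
		info, err := cli.ContainerInspect(context.Background(), "ingress-addon-legacy-861900")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("state:", info.State.Status)
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}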
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-861900 -n ingress-addon-legacy-861900
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddonActivation FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-861900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-861900 logs -n 25: (1.399849413s)
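The "(dbg) Run ... / (dbg) Done ... (1.399849413s)" pairs above come from the test harness shelling out to the minikube binary, capturing output, and timing the call. A hedged sketch of that pattern follows; runDbg is a made-up helper name for illustration, not the actual helpers_test.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runDbg runs a command, captures combined stdout/stderr, and times it.
	func runDbg(name string, args ...string) (string, time.Duration, error) {
		start := time.Now()
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), time.Since(start), err
	}

	func main() {
		out, took, err := runDbg("out/minikube-linux-arm64",
			"-p", "ingress-addon-legacy-861900", "logs", "-n", "25")
		fmt.Printf("took %s err=%v\n%s", took, err, out)
	}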
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-133528 image rm                                             | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-133528               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-133528 image ls                                             | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	| image          | functional-133528 image load                                           | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-133528 image ls                                             | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	| image          | functional-133528 image save --daemon                                  | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-133528               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/ssl/certs/713573.pem                                              |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /usr/share/ca-certificates/713573.pem                                  |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/ssl/certs/7135732.pem                                             |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /usr/share/ca-certificates/7135732.pem                                 |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/test/nested/copy/713573/hosts                                     |                             |         |         |                     |                     |
	| image          | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh pgrep                                            | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-133528 image build -t                                       | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | localhost/my-image:functional-133528                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-133528 image ls                                             | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	| image          | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-133528                                                   | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	| start          | -p ingress-addon-legacy-861900                                         | ingress-addon-legacy-861900 | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:52 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-861900                                            | ingress-addon-legacy-861900 | jenkins | v1.32.0 | 09 Nov 23 21:52 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/09 21:50:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 21:50:45.418463  742569 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:50:45.418591  742569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:50:45.418601  742569 out.go:309] Setting ErrFile to fd 2...
	I1109 21:50:45.418607  742569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:50:45.418864  742569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 21:50:45.419281  742569 out.go:303] Setting JSON to false
	I1109 21:50:45.420286  742569 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16395,"bootTime":1699550250,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 21:50:45.420366  742569 start.go:138] virtualization:  
	I1109 21:50:45.422882  742569 out.go:177] * [ingress-addon-legacy-861900] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 21:50:45.425127  742569 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 21:50:45.427229  742569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 21:50:45.425297  742569 notify.go:220] Checking for updates...
	I1109 21:50:45.430957  742569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:50:45.432630  742569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 21:50:45.434427  742569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 21:50:45.436219  742569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 21:50:45.438224  742569 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 21:50:45.462095  742569 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 21:50:45.462195  742569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:50:45.538771  742569 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-09 21:50:45.529340072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:50:45.538927  742569 docker.go:295] overlay module found
	I1109 21:50:45.541147  742569 out.go:177] * Using the docker driver based on user configuration
	I1109 21:50:45.542856  742569 start.go:298] selected driver: docker
	I1109 21:50:45.542874  742569 start.go:902] validating driver "docker" against <nil>
	I1109 21:50:45.542893  742569 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 21:50:45.543538  742569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:50:45.610265  742569 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-09 21:50:45.601351572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:50:45.610448  742569 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1109 21:50:45.610675  742569 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 21:50:45.613035  742569 out.go:177] * Using Docker driver with root privileges
	I1109 21:50:45.615065  742569 cni.go:84] Creating CNI manager for ""
	I1109 21:50:45.615083  742569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:50:45.615099  742569 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 21:50:45.615121  742569 start_flags.go:323] config:
	{Name:ingress-addon-legacy-861900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-861900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:50:45.617118  742569 out.go:177] * Starting control plane node ingress-addon-legacy-861900 in cluster ingress-addon-legacy-861900
	I1109 21:50:45.618988  742569 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 21:50:45.621019  742569 out.go:177] * Pulling base image ...
	I1109 21:50:45.623388  742569 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1109 21:50:45.623477  742569 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1109 21:50:45.640600  742569 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1109 21:50:45.640629  742569 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1109 21:50:45.687852  742569 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1109 21:50:45.687891  742569 cache.go:56] Caching tarball of preloaded images
	I1109 21:50:45.688052  742569 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1109 21:50:45.690336  742569 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1109 21:50:45.692617  742569 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:50:45.806723  742569 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1109 21:50:57.889188  742569 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:50:57.889293  742569 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:50:59.078610  742569 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1109 21:50:59.078996  742569 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/config.json ...
	I1109 21:50:59.079034  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/config.json: {Name:mkfb3684ff169eedb6a0ee7058211adbfaef9a25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
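The WriteFile lock above (Delay:500ms Timeout:1m0s) guards the profile's config.json against concurrent writers. A minimal sketch of that retry-until-timeout pattern, assuming a plain O_EXCL lock file; minikube's lock.go wraps a proper mutex library, so this shows the idea rather than its implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock polls for an exclusive lock file until the timeout expires.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil // release function
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay) // the 500ms Delay seen in the log
		}
	}

	func main() {
		release, err := acquireLock("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to write config.json")
	}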
	I1109 21:50:59.079232  742569 cache.go:194] Successfully downloaded all kic artifacts
	I1109 21:50:59.079282  742569 start.go:365] acquiring machines lock for ingress-addon-legacy-861900: {Name:mk4364e9b38a22c26b621152ffbe453bb0f10d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 21:50:59.079345  742569 start.go:369] acquired machines lock for "ingress-addon-legacy-861900" in 47.024µs
	I1109 21:50:59.079367  742569 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-861900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-861900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 21:50:59.079440  742569 start.go:125] createHost starting for "" (driver="docker")
	I1109 21:50:59.081702  742569 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1109 21:50:59.081924  742569 start.go:159] libmachine.API.Create for "ingress-addon-legacy-861900" (driver="docker")
	I1109 21:50:59.081966  742569 client.go:168] LocalClient.Create starting
	I1109 21:50:59.082039  742569 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem
	I1109 21:50:59.082075  742569 main.go:141] libmachine: Decoding PEM data...
	I1109 21:50:59.082096  742569 main.go:141] libmachine: Parsing certificate...
	I1109 21:50:59.082169  742569 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem
	I1109 21:50:59.082193  742569 main.go:141] libmachine: Decoding PEM data...
	I1109 21:50:59.082208  742569 main.go:141] libmachine: Parsing certificate...
	I1109 21:50:59.082590  742569 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-861900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 21:50:59.100467  742569 cli_runner.go:211] docker network inspect ingress-addon-legacy-861900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 21:50:59.100554  742569 network_create.go:281] running [docker network inspect ingress-addon-legacy-861900] to gather additional debugging logs...
	I1109 21:50:59.100577  742569 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-861900
	W1109 21:50:59.130495  742569 cli_runner.go:211] docker network inspect ingress-addon-legacy-861900 returned with exit code 1
	I1109 21:50:59.130530  742569 network_create.go:284] error running [docker network inspect ingress-addon-legacy-861900]: docker network inspect ingress-addon-legacy-861900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-861900 not found
	I1109 21:50:59.130544  742569 network_create.go:286] output of [docker network inspect ingress-addon-legacy-861900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-861900 not found
	
	** /stderr **
	I1109 21:50:59.130661  742569 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 21:50:59.148570  742569 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40005d75d0}
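network.go settles on 192.168.49.0/24 by scanning candidate private /24s and taking the first one that collides with nothing already in use. A rough Go sketch of that scan; the candidate ordering and step are assumptions for illustration, and the real check also consults host routes, not just known networks:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first candidate /24 that does not overlap
	// any subnet in taken. Starting octet and step are illustrative.
	func firstFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
		for third := 49; third < 255; third += 9 {
			_, cand, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if err != nil {
				return nil, err
			}
			free := true
			for _, t := range taken {
				if t.Contains(cand.IP) || cand.Contains(t.IP) {
					free = false
					break
				}
			}
			if free {
				return cand, nil
			}
		}
		return nil, fmt.Errorf("no free private subnet found")
	}

	func main() {
		_, bridge, _ := net.ParseCIDR("172.17.0.0/16") // e.g. the default docker bridge
		sub, err := firstFreeSubnet([]*net.IPNet{bridge})
		fmt.Println(sub, err) // 192.168.49.0/24 <nil>
	}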
	I1109 21:50:59.148608  742569 network_create.go:124] attempt to create docker network ingress-addon-legacy-861900 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 21:50:59.148670  742569 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 ingress-addon-legacy-861900
	I1109 21:50:59.219319  742569 network_create.go:108] docker network ingress-addon-legacy-861900 192.168.49.0/24 created
	I1109 21:50:59.219355  742569 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-861900" container
	I1109 21:50:59.219432  742569 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 21:50:59.235541  742569 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-861900 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --label created_by.minikube.sigs.k8s.io=true
	I1109 21:50:59.254545  742569 oci.go:103] Successfully created a docker volume ingress-addon-legacy-861900
	I1109 21:50:59.254632  742569 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-861900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --entrypoint /usr/bin/test -v ingress-addon-legacy-861900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1109 21:51:00.787720  742569 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-861900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --entrypoint /usr/bin/test -v ingress-addon-legacy-861900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib: (1.53303961s)
	I1109 21:51:00.787753  742569 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-861900
	I1109 21:51:00.787773  742569 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1109 21:51:00.787793  742569 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 21:51:00.787884  742569 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-861900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 21:51:05.744827  742569 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-861900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956896098s)
	I1109 21:51:05.744861  742569 kic.go:203] duration metric: took 4.957065 seconds to extract preloaded images to volume
	W1109 21:51:05.744999  742569 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 21:51:05.745108  742569 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 21:51:05.810054  742569 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-861900 --name ingress-addon-legacy-861900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --network ingress-addon-legacy-861900 --ip 192.168.49.2 --volume ingress-addon-legacy-861900:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1109 21:51:06.175172  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Running}}
	I1109 21:51:06.201776  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:06.227835  742569 cli_runner.go:164] Run: docker exec ingress-addon-legacy-861900 stat /var/lib/dpkg/alternatives/iptables
	I1109 21:51:06.326570  742569 oci.go:144] the created container "ingress-addon-legacy-861900" has a running status.
	I1109 21:51:06.326608  742569 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa...
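kic.go writes a fresh RSA keypair here: the id_rsa above stays on the host, and its .pub half is pushed into /home/docker/.ssh/authorized_keys two lines below. A self-contained sketch of generating such a pair with golang.org/x/crypto/ssh; the output paths are placeholders:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Generate the private key (2048 bits is typical for machine keys).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// PEM-encode it as id_rsa.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}
		// Emit the matching authorized_keys line as id_rsa.pub.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			panic(err)
		}
	}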
	I1109 21:51:07.383662  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1109 21:51:07.383751  742569 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 21:51:07.410675  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:07.429099  742569 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 21:51:07.429123  742569 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-861900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 21:51:07.497359  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:07.520153  742569 machine.go:88] provisioning docker machine ...
	I1109 21:51:07.520188  742569 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-861900"
	I1109 21:51:07.520254  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:07.538079  742569 main.go:141] libmachine: Using SSH client type: native
	I1109 21:51:07.538546  742569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33690 <nil> <nil>}
	I1109 21:51:07.538567  742569 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-861900 && echo "ingress-addon-legacy-861900" | sudo tee /etc/hostname
	I1109 21:51:07.692071  742569 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-861900
	
	I1109 21:51:07.692208  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:07.710427  742569 main.go:141] libmachine: Using SSH client type: native
	I1109 21:51:07.710841  742569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33690 <nil> <nil>}
	I1109 21:51:07.710860  742569 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-861900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-861900/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-861900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 21:51:07.851506  742569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
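Both hostname commands above travel over the forwarded SSH port (127.0.0.1:33690) as the docker user, authenticated with the machine key generated earlier. A minimal round-trip sketch with golang.org/x/crypto/ssh; host-key verification is skipped only because the target is a throwaway local container:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("id_rsa") // the machine key from kic.go above
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway VM only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33690", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("%s err=%v\n", out, err)
	}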
	I1109 21:51:07.851531  742569 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 21:51:07.851556  742569 ubuntu.go:177] setting up certificates
	I1109 21:51:07.851564  742569 provision.go:83] configureAuth start
	I1109 21:51:07.851636  742569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-861900
	I1109 21:51:07.869018  742569 provision.go:138] copyHostCerts
	I1109 21:51:07.869052  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 21:51:07.869084  742569 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 21:51:07.869091  742569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 21:51:07.869163  742569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 21:51:07.869246  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 21:51:07.869262  742569 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 21:51:07.869266  742569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 21:51:07.869292  742569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 21:51:07.869340  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 21:51:07.869355  742569 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 21:51:07.869359  742569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 21:51:07.869390  742569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 21:51:07.869441  742569 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-861900 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-861900]
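provision.go issues the docker-machine server certificate off the shared minikube CA, with the IPs and names listed above as SANs. A compact crypto/x509 sketch of that issuance; caCert and caKey stand in for the already-parsed ca.pem and ca-key.pem, and the field choices are a plausible reading of this log, not minikube's exact template:

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// signServerCert creates a key and a CA-signed server cert covering the
	// SANs seen in the log line above.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-861900"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-861900"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}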
	I1109 21:51:08.075809  742569 provision.go:172] copyRemoteCerts
	I1109 21:51:08.075883  742569 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 21:51:08.075930  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.093952  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:08.197011  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 21:51:08.197072  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 21:51:08.225868  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 21:51:08.225935  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1109 21:51:08.254344  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 21:51:08.254403  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 21:51:08.282886  742569 provision.go:86] duration metric: configureAuth took 431.30749ms
	I1109 21:51:08.282914  742569 ubuntu.go:193] setting minikube options for container-runtime
	I1109 21:51:08.283109  742569 config.go:182] Loaded profile config "ingress-addon-legacy-861900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1109 21:51:08.283259  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.301107  742569 main.go:141] libmachine: Using SSH client type: native
	I1109 21:51:08.301538  742569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33690 <nil> <nil>}
	I1109 21:51:08.301562  742569 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 21:51:08.579643  742569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 21:51:08.579670  742569 machine.go:91] provisioned docker machine in 1.059498004s
	I1109 21:51:08.579680  742569 client.go:171] LocalClient.Create took 9.497706313s
	I1109 21:51:08.579692  742569 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-861900" took 9.497768565s
	I1109 21:51:08.579699  742569 start.go:300] post-start starting for "ingress-addon-legacy-861900" (driver="docker")
	I1109 21:51:08.579710  742569 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 21:51:08.579787  742569 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 21:51:08.579830  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.597508  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:08.697175  742569 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 21:51:08.701298  742569 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 21:51:08.701338  742569 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 21:51:08.701372  742569 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 21:51:08.701387  742569 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1109 21:51:08.701404  742569 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 21:51:08.701488  742569 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 21:51:08.701575  742569 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 21:51:08.701587  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> /etc/ssl/certs/7135732.pem
	I1109 21:51:08.701706  742569 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 21:51:08.711870  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 21:51:08.739420  742569 start.go:303] post-start completed in 159.706147ms
	I1109 21:51:08.739839  742569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-861900
	I1109 21:51:08.756416  742569 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/config.json ...
	I1109 21:51:08.756692  742569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 21:51:08.756738  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.774683  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:08.872239  742569 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 21:51:08.877839  742569 start.go:128] duration metric: createHost completed in 9.798384762s
	I1109 21:51:08.877864  742569 start.go:83] releasing machines lock for "ingress-addon-legacy-861900", held for 9.798507708s
	I1109 21:51:08.877960  742569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-861900
	I1109 21:51:08.896144  742569 ssh_runner.go:195] Run: cat /version.json
	I1109 21:51:08.896203  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.896431  742569 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 21:51:08.896491  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.918883  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:08.925958  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:09.151387  742569 ssh_runner.go:195] Run: systemctl --version
	I1109 21:51:09.157163  742569 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 21:51:09.304640  742569 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 21:51:09.310408  742569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 21:51:09.333841  742569 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 21:51:09.333921  742569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 21:51:09.371885  742569 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1109 21:51:09.371905  742569 start.go:472] detecting cgroup driver to use...
	I1109 21:51:09.371937  742569 detect.go:196] detected "cgroupfs" cgroup driver on host os
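detect.go reports "cgroupfs" because the host is not driving cgroups through systemd; the sed commands further down then wire that same driver into CRI-O's config. One common heuristic for this kind of check, offered here as an assumption rather than minikube's exact logic, is to probe for the cgroup v2 unified hierarchy:

	package main

	import (
		"fmt"
		"os"
	)

	// guessCgroupDriver: cgroup v2 hosts expose /sys/fs/cgroup/cgroup.controllers
	// and are usually paired with the systemd driver; otherwise assume the
	// legacy cgroupfs driver, as this host does.
	func guessCgroupDriver() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "systemd"
		}
		return "cgroupfs"
	}

	func main() {
		fmt.Println("detected cgroup driver:", guessCgroupDriver())
	}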
	I1109 21:51:09.371985  742569 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 21:51:09.389629  742569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 21:51:09.403784  742569 docker.go:203] disabling cri-docker service (if available) ...
	I1109 21:51:09.403898  742569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 21:51:09.419756  742569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 21:51:09.436395  742569 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 21:51:09.527603  742569 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 21:51:09.629373  742569 docker.go:219] disabling docker service ...
	I1109 21:51:09.629452  742569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 21:51:09.654697  742569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 21:51:09.669540  742569 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 21:51:09.774800  742569 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 21:51:09.878578  742569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 21:51:09.893253  742569 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 21:51:09.912616  742569 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1109 21:51:09.912735  742569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:51:09.924690  742569 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 21:51:09.924814  742569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:51:09.937102  742569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:51:09.948967  742569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:51:09.960824  742569 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 21:51:09.972144  742569 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 21:51:09.985875  742569 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 21:51:09.996168  742569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 21:51:10.101340  742569 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 21:51:10.234932  742569 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 21:51:10.235062  742569 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 21:51:10.239693  742569 start.go:540] Will wait 60s for crictl version
	I1109 21:51:10.239792  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:10.245139  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 21:51:10.287983  742569 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1109 21:51:10.288090  742569 ssh_runner.go:195] Run: crio --version
	I1109 21:51:10.332141  742569 ssh_runner.go:195] Run: crio --version
	I1109 21:51:10.380071  742569 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1109 21:51:10.381875  742569 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-861900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 21:51:10.401607  742569 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 21:51:10.406464  742569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 21:51:10.420155  742569 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1109 21:51:10.421626  742569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 21:51:10.475348  742569 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1109 21:51:10.475419  742569 ssh_runner.go:195] Run: which lz4
	I1109 21:51:10.479723  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1109 21:51:10.479845  742569 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1109 21:51:10.484056  742569 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1109 21:51:10.484088  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1109 21:51:12.514561  742569 crio.go:444] Took 2.034742 seconds to copy over tarball
	I1109 21:51:12.514636  742569 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1109 21:51:15.183705  742569 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.669040061s)
	I1109 21:51:15.183729  742569 crio.go:451] Took 2.669144 seconds to extract the tarball
	I1109 21:51:15.183739  742569 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1109 21:51:15.348173  742569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 21:51:15.386830  742569 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1109 21:51:15.386856  742569 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1109 21:51:15.386901  742569 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:15.386925  742569 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1109 21:51:15.387075  742569 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:15.387079  742569 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1109 21:51:15.387137  742569 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1109 21:51:15.387152  742569 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1109 21:51:15.387211  742569 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1109 21:51:15.387227  742569 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:15.388383  742569 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:15.388785  742569 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:15.389038  742569 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1109 21:51:15.389227  742569 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1109 21:51:15.389445  742569 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1109 21:51:15.389504  742569 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:15.389547  742569 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1109 21:51:15.389595  742569 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W1109 21:51:15.728950  742569 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.729121  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1109 21:51:15.752629  742569 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.752819  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:15.769476  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1109 21:51:15.794603  742569 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1109 21:51:15.794705  742569 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1109 21:51:15.794781  742569 ssh_runner.go:195] Run: which crictl
	W1109 21:51:15.800176  742569 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.800482  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1109 21:51:15.810604  742569 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.811051  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1109 21:51:15.812120  742569 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.812311  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1109 21:51:15.832712  742569 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.832934  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:15.837303  742569 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1109 21:51:15.837387  742569 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:15.837464  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.870614  742569 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1109 21:51:15.870792  742569 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1109 21:51:15.870749  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1109 21:51:15.870860  742569 ssh_runner.go:195] Run: which crictl
	W1109 21:51:15.926516  742569 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.926739  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:15.977479  742569 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1109 21:51:15.977562  742569 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1109 21:51:15.977645  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.980981  742569 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1109 21:51:15.981062  742569 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1109 21:51:15.981135  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.981253  742569 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1109 21:51:15.981288  742569 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1109 21:51:15.981330  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.995689  742569 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1109 21:51:15.995769  742569 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:15.995848  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.995958  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:16.024989  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1109 21:51:16.025149  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1109 21:51:16.176849  742569 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1109 21:51:16.176896  742569 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:16.176946  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:16.177020  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1109 21:51:16.177024  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1109 21:51:16.177091  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1109 21:51:16.177162  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1109 21:51:16.177230  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:16.177248  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1109 21:51:16.282115  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1109 21:51:16.282228  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:16.282393  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1109 21:51:16.282283  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1109 21:51:16.282341  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1109 21:51:16.332757  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1109 21:51:16.332823  742569 cache_images.go:92] LoadImages completed in 945.954656ms
	W1109 21:51:16.332904  742569 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
	I1109 21:51:16.332972  742569 ssh_runner.go:195] Run: crio config
	I1109 21:51:16.384545  742569 cni.go:84] Creating CNI manager for ""
	I1109 21:51:16.384565  742569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:51:16.384595  742569 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 21:51:16.384619  742569 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-861900 NodeName:ingress-addon-legacy-861900 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1109 21:51:16.384764  742569 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-861900"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 21:51:16.384854  742569 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-861900 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-861900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 21:51:16.384923  742569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1109 21:51:16.395320  742569 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 21:51:16.395390  742569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 21:51:16.406910  742569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1109 21:51:16.427342  742569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1109 21:51:16.447689  742569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1109 21:51:16.468801  742569 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 21:51:16.473284  742569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 21:51:16.486543  742569 certs.go:56] Setting up /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900 for IP: 192.168.49.2
	I1109 21:51:16.486578  742569 certs.go:190] acquiring lock for shared ca certs: {Name:mk44b1a46a3acda84ddb5040e7a20ebcace98935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:16.486777  742569 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key
	I1109 21:51:16.486864  742569 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key
	I1109 21:51:16.486927  742569 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key
	I1109 21:51:16.486941  742569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt with IP's: []
	I1109 21:51:16.958378  742569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt ...
	I1109 21:51:16.958409  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: {Name:mk7aa7e55e97645ec9e7306f3f97250a72dcf0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:16.958628  742569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key ...
	I1109 21:51:16.958644  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key: {Name:mkb5a3ab7cc38c0584227f902360b3e2a653e988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:16.958729  742569 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key.dd3b5fb2
	I1109 21:51:16.958759  742569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1109 21:51:17.523161  742569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt.dd3b5fb2 ...
	I1109 21:51:17.523197  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt.dd3b5fb2: {Name:mk4551e780e11b77718a4e1dcaba07e2f499c6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:17.523388  742569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key.dd3b5fb2 ...
	I1109 21:51:17.523404  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key.dd3b5fb2: {Name:mk90613120d42f665b93b64cf37583d163f4b9a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:17.523489  742569 certs.go:337] copying /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt
	I1109 21:51:17.523575  742569 certs.go:341] copying /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key
	I1109 21:51:17.523637  742569 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key
	I1109 21:51:17.523655  742569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt with IP's: []
	I1109 21:51:18.352516  742569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt ...
	I1109 21:51:18.352548  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt: {Name:mkf7d0a3a5f2788403184380e5ad82ce03c7e1fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:18.352735  742569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key ...
	I1109 21:51:18.352749  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key: {Name:mk74e0f683b73cde82493d7dd6ec33b6dc3d2eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:18.352834  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 21:51:18.352855  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 21:51:18.352873  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 21:51:18.352894  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 21:51:18.352909  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 21:51:18.352920  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 21:51:18.352936  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 21:51:18.352952  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 21:51:18.353013  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem (1338 bytes)
	W1109 21:51:18.353054  742569 certs.go:433] ignoring /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573_empty.pem, impossibly tiny 0 bytes
	I1109 21:51:18.353065  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 21:51:18.353096  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem (1078 bytes)
	I1109 21:51:18.353128  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem (1123 bytes)
	I1109 21:51:18.353156  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem (1679 bytes)
	I1109 21:51:18.353205  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 21:51:18.353239  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem -> /usr/share/ca-certificates/713573.pem
	I1109 21:51:18.353256  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> /usr/share/ca-certificates/7135732.pem
	I1109 21:51:18.353268  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:51:18.353850  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 21:51:18.382137  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 21:51:18.409997  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 21:51:18.436959  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 21:51:18.464067  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 21:51:18.491092  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 21:51:18.518963  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 21:51:18.546248  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 21:51:18.573679  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem --> /usr/share/ca-certificates/713573.pem (1338 bytes)
	I1109 21:51:18.601235  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /usr/share/ca-certificates/7135732.pem (1708 bytes)
	I1109 21:51:18.628790  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 21:51:18.656502  742569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 21:51:18.676612  742569 ssh_runner.go:195] Run: openssl version
	I1109 21:51:18.683437  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/713573.pem && ln -fs /usr/share/ca-certificates/713573.pem /etc/ssl/certs/713573.pem"
	I1109 21:51:18.694541  742569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/713573.pem
	I1109 21:51:18.698958  742569 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  9 21:41 /usr/share/ca-certificates/713573.pem
	I1109 21:51:18.699017  742569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/713573.pem
	I1109 21:51:18.707268  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/713573.pem /etc/ssl/certs/51391683.0"
	I1109 21:51:18.718509  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135732.pem && ln -fs /usr/share/ca-certificates/7135732.pem /etc/ssl/certs/7135732.pem"
	I1109 21:51:18.729338  742569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135732.pem
	I1109 21:51:18.733817  742569 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  9 21:41 /usr/share/ca-certificates/7135732.pem
	I1109 21:51:18.733926  742569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135732.pem
	I1109 21:51:18.742173  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7135732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 21:51:18.753770  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 21:51:18.764674  742569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:51:18.769285  742569 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  9 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:51:18.769350  742569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:51:18.777787  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
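The three test/ln/hash blocks above are how each CA lands in the node's system trust store: the PEM is copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs under its OpenSSL subject hash so TLS libraries can resolve it. A minimal shell sketch of the same sequence, using the minikubeCA path from this run (any other PEM works the same way):

    # compute the OpenSSL subject hash of the CA certificate
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # link the PEM into the trust directory, then under its hash with a ".0" suffix
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

The b5213941.0 link created in the log below is exactly this subject hash for minikubeCA.pem.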
	I1109 21:51:18.789075  742569 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1109 21:51:18.793479  742569 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1109 21:51:18.793581  742569 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-861900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-861900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:51:18.793697  742569 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 21:51:18.793757  742569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 21:51:18.834384  742569 cri.go:89] found id: ""
	I1109 21:51:18.834498  742569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 21:51:18.845209  742569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 21:51:18.855608  742569 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1109 21:51:18.855700  742569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 21:51:18.865881  742569 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 21:51:18.865928  742569 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 21:51:18.921743  742569 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1109 21:51:18.921915  742569 kubeadm.go:322] [preflight] Running pre-flight checks
	I1109 21:51:18.979536  742569 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1109 21:51:18.979606  742569 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1109 21:51:18.979648  742569 kubeadm.go:322] OS: Linux
	I1109 21:51:18.979705  742569 kubeadm.go:322] CGROUPS_CPU: enabled
	I1109 21:51:18.979755  742569 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1109 21:51:18.979804  742569 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1109 21:51:18.979854  742569 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1109 21:51:18.979902  742569 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1109 21:51:18.979952  742569 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1109 21:51:19.076089  742569 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 21:51:19.076205  742569 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 21:51:19.076297  742569 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 21:51:19.308268  742569 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 21:51:19.309675  742569 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 21:51:19.309920  742569 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1109 21:51:19.414708  742569 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 21:51:19.417842  742569 out.go:204]   - Generating certificates and keys ...
	I1109 21:51:19.417943  742569 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1109 21:51:19.418036  742569 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1109 21:51:20.914684  742569 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 21:51:21.520430  742569 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1109 21:51:21.890763  742569 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1109 21:51:22.512348  742569 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1109 21:51:22.803467  742569 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1109 21:51:22.804017  742569 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-861900 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 21:51:22.914688  742569 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1109 21:51:22.915066  742569 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-861900 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 21:51:23.296053  742569 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 21:51:23.468116  742569 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 21:51:23.812655  742569 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1109 21:51:23.813072  742569 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 21:51:24.879276  742569 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 21:51:25.188308  742569 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 21:51:25.421218  742569 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 21:51:25.868403  742569 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 21:51:25.869649  742569 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 21:51:25.871879  742569 out.go:204]   - Booting up control plane ...
	I1109 21:51:25.871970  742569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 21:51:25.886735  742569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 21:51:25.888699  742569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 21:51:25.890244  742569 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 21:51:25.893211  742569 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 21:51:38.896071  742569 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002358 seconds
	I1109 21:51:38.896192  742569 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 21:51:38.912125  742569 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 21:51:39.430085  742569 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 21:51:39.430232  742569 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-861900 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1109 21:51:39.938956  742569 kubeadm.go:322] [bootstrap-token] Using token: sofv9u.ps5ywt9mluyicgmk
	I1109 21:51:39.941443  742569 out.go:204]   - Configuring RBAC rules ...
	I1109 21:51:39.941581  742569 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 21:51:39.945514  742569 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 21:51:39.953941  742569 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 21:51:39.958041  742569 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 21:51:39.962116  742569 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 21:51:39.967808  742569 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 21:51:39.983106  742569 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 21:51:40.270745  742569 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1109 21:51:40.379981  742569 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1109 21:51:40.385004  742569 kubeadm.go:322] 
	I1109 21:51:40.385078  742569 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1109 21:51:40.385090  742569 kubeadm.go:322] 
	I1109 21:51:40.385162  742569 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1109 21:51:40.385174  742569 kubeadm.go:322] 
	I1109 21:51:40.385199  742569 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1109 21:51:40.385277  742569 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 21:51:40.385337  742569 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 21:51:40.385346  742569 kubeadm.go:322] 
	I1109 21:51:40.385395  742569 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1109 21:51:40.385468  742569 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 21:51:40.385534  742569 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 21:51:40.385544  742569 kubeadm.go:322] 
	I1109 21:51:40.385623  742569 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 21:51:40.385702  742569 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1109 21:51:40.385711  742569 kubeadm.go:322] 
	I1109 21:51:40.385789  742569 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sofv9u.ps5ywt9mluyicgmk \
	I1109 21:51:40.385893  742569 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 \
	I1109 21:51:40.386120  742569 kubeadm.go:322]     --control-plane 
	I1109 21:51:40.386135  742569 kubeadm.go:322] 
	I1109 21:51:40.386214  742569 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1109 21:51:40.386222  742569 kubeadm.go:322] 
	I1109 21:51:40.386299  742569 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sofv9u.ps5ywt9mluyicgmk \
	I1109 21:51:40.386431  742569 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 
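The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation; the CA path below is the certificateDir this run uses (/var/lib/minikube/certs, per the "[certs]" line above) rather than kubeadm's usual /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'

If the output matches the sha256:... value kubeadm printed, the join command is safe to hand to a new node.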
	I1109 21:51:40.389560  742569 kubeadm.go:322] W1109 21:51:18.920948    1229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1109 21:51:40.389796  742569 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1109 21:51:40.389906  742569 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 21:51:40.390056  742569 kubeadm.go:322] W1109 21:51:25.887010    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1109 21:51:40.390192  742569 kubeadm.go:322] W1109 21:51:25.888953    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1109 21:51:40.390217  742569 cni.go:84] Creating CNI manager for ""
	I1109 21:51:40.390233  742569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:51:40.392478  742569 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1109 21:51:40.394478  742569 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 21:51:40.399865  742569 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1109 21:51:40.399888  742569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1109 21:51:40.421714  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
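With the kindnet manifest applied, CNI health can be checked by watching its pods come up. A sketch using the same pinned kubectl and kubeconfig as the log (the app=kindnet label is an assumption about how minikube's kindnet manifest tags its pods, consistent with the kindnet-qmz79 pod seen later in this log):

    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide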
	I1109 21:51:40.936393  742569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 21:51:40.936537  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:40.936611  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b minikube.k8s.io/name=ingress-addon-legacy-861900 minikube.k8s.io/updated_at=2023_11_09T21_51_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:41.077408  742569 ops.go:34] apiserver oom_adj: -16
	I1109 21:51:41.077491  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:41.169448  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:41.765046  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:42.265317  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:42.765444  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:43.264564  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:43.764483  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:44.265301  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:44.765188  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:45.264504  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:45.764457  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:46.265016  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:46.765301  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:47.265014  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:47.764435  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:48.265456  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:48.765154  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:49.265192  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:49.765286  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:50.265524  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:50.765305  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:51.265209  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:51.764573  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:52.264820  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:52.764817  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:53.265309  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:53.764921  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:54.264509  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:54.765232  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:55.264745  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:55.400254  742569 kubeadm.go:1081] duration metric: took 14.463769128s to wait for elevateKubeSystemPrivileges.
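The burst of `kubectl get sa default` calls above appears to be elevateKubeSystemPrivileges polling at roughly 500ms intervals until the default ServiceAccount exists. A shell equivalent of that retry loop, under the same paths:

    # poll until the default ServiceAccount is created (what the ~500ms loop above does)
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done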
	I1109 21:51:55.400284  742569 kubeadm.go:406] StartCluster complete in 36.606717352s
	I1109 21:51:55.400301  742569 settings.go:142] acquiring lock: {Name:mk717b4baf2280543b738622644195ea0d60d476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:55.400360  742569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:51:55.401151  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/kubeconfig: {Name:mk5701fd19491b0b49f183ef877286e38ea5f8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:55.401886  742569 kapi.go:59] client config for ingress-addon-legacy-861900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 21:51:55.402502  742569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 21:51:55.402777  742569 config.go:182] Loaded profile config "ingress-addon-legacy-861900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1109 21:51:55.402813  742569 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1109 21:51:55.402876  742569 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-861900"
	I1109 21:51:55.402891  742569 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-861900"
	I1109 21:51:55.402945  742569 host.go:66] Checking if "ingress-addon-legacy-861900" exists ...
	I1109 21:51:55.403414  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:55.403885  742569 cert_rotation.go:137] Starting client certificate rotation controller
	I1109 21:51:55.404037  742569 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-861900"
	I1109 21:51:55.404059  742569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-861900"
	I1109 21:51:55.404378  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:55.467424  742569 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:55.469812  742569 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 21:51:55.469839  742569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 21:51:55.469907  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:55.468221  742569 kapi.go:59] client config for ingress-addon-legacy-861900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 21:51:55.471197  742569 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-861900"
	I1109 21:51:55.471245  742569 host.go:66] Checking if "ingress-addon-legacy-861900" exists ...
	I1109 21:51:55.471733  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:55.508603  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:55.519846  742569 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 21:51:55.519869  742569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 21:51:55.519929  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:55.541586  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:55.591319  742569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
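The sed pipeline above edits the CoreDNS Corefile in place before replacing the ConfigMap: it inserts a hosts plugin block ahead of the forward directive, so host.minikube.internal resolves to the container gateway, and a log directive ahead of errors. The resulting Corefile fragment should read roughly:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf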
	I1109 21:51:55.593095  742569 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-861900" context rescaled to 1 replicas
	I1109 21:51:55.593134  742569 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 21:51:55.595281  742569 out.go:177] * Verifying Kubernetes components...
	I1109 21:51:55.597815  742569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 21:51:55.727386  742569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 21:51:55.810877  742569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 21:51:56.161052  742569 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1109 21:51:56.161718  742569 kapi.go:59] client config for ingress-addon-legacy-861900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 21:51:56.161977  742569 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-861900" to be "Ready" ...
	I1109 21:51:56.297125  742569 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1109 21:51:56.299632  742569 addons.go:502] enable addons completed in 896.808823ms: enabled=[storage-provisioner default-storageclass]
	I1109 21:51:58.182169  742569 node_ready.go:58] node "ingress-addon-legacy-861900" has status "Ready":"False"
	I1109 21:52:00.679679  742569 node_ready.go:58] node "ingress-addon-legacy-861900" has status "Ready":"False"
	I1109 21:52:02.680215  742569 node_ready.go:58] node "ingress-addon-legacy-861900" has status "Ready":"False"
	I1109 21:52:04.180104  742569 node_ready.go:49] node "ingress-addon-legacy-861900" has status "Ready":"True"
	I1109 21:52:04.180129  742569 node_ready.go:38] duration metric: took 8.018130775s waiting for node "ingress-addon-legacy-861900" to be "Ready" ...
	I1109 21:52:04.180140  742569 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:52:04.187621  742569 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:06.204386  742569 pod_ready.go:102] pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace has status "Ready":"False"
	I1109 21:52:08.206235  742569 pod_ready.go:102] pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace has status "Ready":"False"
	I1109 21:52:10.203876  742569 pod_ready.go:92] pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.203902  742569 pod_ready.go:81] duration metric: took 6.016247789s waiting for pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.203914  742569 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.208348  742569 pod_ready.go:92] pod "etcd-ingress-addon-legacy-861900" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.208369  742569 pod_ready.go:81] duration metric: took 4.447883ms waiting for pod "etcd-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.208382  742569 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.212566  742569 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-861900" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.212589  742569 pod_ready.go:81] duration metric: took 4.199769ms waiting for pod "kube-apiserver-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.212600  742569 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.217160  742569 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-861900" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.217181  742569 pod_ready.go:81] duration metric: took 4.573873ms waiting for pod "kube-controller-manager-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.217192  742569 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzpwp" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.221490  742569 pod_ready.go:92] pod "kube-proxy-hzpwp" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.221513  742569 pod_ready.go:81] duration metric: took 4.314428ms waiting for pod "kube-proxy-hzpwp" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.221526  742569 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.398836  742569 request.go:629] Waited for 177.227916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-861900
	I1109 21:52:10.598773  742569 request.go:629] Waited for 197.305791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-861900
	I1109 21:52:10.601409  742569 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-861900" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.601433  742569 pod_ready.go:81] duration metric: took 379.899051ms waiting for pod "kube-scheduler-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.601446  742569 pod_ready.go:38] duration metric: took 6.421289072s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:52:10.601460  742569 api_server.go:52] waiting for apiserver process to appear ...
	I1109 21:52:10.601552  742569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 21:52:10.614287  742569 api_server.go:72] duration metric: took 15.021115193s to wait for apiserver process to appear ...
	I1109 21:52:10.614320  742569 api_server.go:88] waiting for apiserver healthz status ...
	I1109 21:52:10.614338  742569 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 21:52:10.623265  742569 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
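The healthz probe is a plain HTTPS GET against the apiserver and can be reproduced from the host. A sketch using this run's CA bundle (the apiserver cert generated above includes 192.168.49.2 in its SANs, so verification should succeed; this also assumes /healthz is readable anonymously, which default RBAC allows):

    curl --cacert /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt \
      https://192.168.49.2:8443/healthz

Expected output is the same bare "ok" logged above.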
	I1109 21:52:10.624104  742569 api_server.go:141] control plane version: v1.18.20
	I1109 21:52:10.624127  742569 api_server.go:131] duration metric: took 9.799984ms to wait for apiserver health ...
	I1109 21:52:10.624135  742569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 21:52:10.799484  742569 request.go:629] Waited for 175.290275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1109 21:52:10.805318  742569 system_pods.go:59] 8 kube-system pods found
	I1109 21:52:10.805357  742569 system_pods.go:61] "coredns-66bff467f8-xvlpj" [21a49005-d70f-4ed3-b4ee-c152858ec6bb] Running
	I1109 21:52:10.805364  742569 system_pods.go:61] "etcd-ingress-addon-legacy-861900" [0e493dc6-a6ba-470f-bb52-1de4dffd8513] Running
	I1109 21:52:10.805370  742569 system_pods.go:61] "kindnet-qmz79" [5c7f9d10-cffa-44a4-ab40-247ae020d804] Running
	I1109 21:52:10.805375  742569 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-861900" [f983107f-d0be-4cc0-aea8-9c14d4795bcd] Running
	I1109 21:52:10.805381  742569 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-861900" [eed21be3-48a7-4bce-8725-c99487aacb55] Running
	I1109 21:52:10.805385  742569 system_pods.go:61] "kube-proxy-hzpwp" [9ef89c7b-9e45-4303-a315-31aa5a71b12a] Running
	I1109 21:52:10.805390  742569 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-861900" [81600c7c-dac1-42cc-aa86-7d5dd3d7eb03] Running
	I1109 21:52:10.805395  742569 system_pods.go:61] "storage-provisioner" [d1a286b9-e693-4d7c-88d0-ab36ed6c87a8] Running
	I1109 21:52:10.805406  742569 system_pods.go:74] duration metric: took 181.265332ms to wait for pod list to return data ...
	I1109 21:52:10.805416  742569 default_sa.go:34] waiting for default service account to be created ...
	I1109 21:52:10.998774  742569 request.go:629] Waited for 193.286254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1109 21:52:11.001774  742569 default_sa.go:45] found service account: "default"
	I1109 21:52:11.001805  742569 default_sa.go:55] duration metric: took 196.381865ms for default service account to be created ...
	I1109 21:52:11.001816  742569 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 21:52:11.199217  742569 request.go:629] Waited for 197.331898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1109 21:52:11.205095  742569 system_pods.go:86] 8 kube-system pods found
	I1109 21:52:11.205125  742569 system_pods.go:89] "coredns-66bff467f8-xvlpj" [21a49005-d70f-4ed3-b4ee-c152858ec6bb] Running
	I1109 21:52:11.205135  742569 system_pods.go:89] "etcd-ingress-addon-legacy-861900" [0e493dc6-a6ba-470f-bb52-1de4dffd8513] Running
	I1109 21:52:11.205143  742569 system_pods.go:89] "kindnet-qmz79" [5c7f9d10-cffa-44a4-ab40-247ae020d804] Running
	I1109 21:52:11.205148  742569 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-861900" [f983107f-d0be-4cc0-aea8-9c14d4795bcd] Running
	I1109 21:52:11.205183  742569 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-861900" [eed21be3-48a7-4bce-8725-c99487aacb55] Running
	I1109 21:52:11.205196  742569 system_pods.go:89] "kube-proxy-hzpwp" [9ef89c7b-9e45-4303-a315-31aa5a71b12a] Running
	I1109 21:52:11.205201  742569 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-861900" [81600c7c-dac1-42cc-aa86-7d5dd3d7eb03] Running
	I1109 21:52:11.205206  742569 system_pods.go:89] "storage-provisioner" [d1a286b9-e693-4d7c-88d0-ab36ed6c87a8] Running
	I1109 21:52:11.205212  742569 system_pods.go:126] duration metric: took 203.390352ms to wait for k8s-apps to be running ...
	I1109 21:52:11.205224  742569 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 21:52:11.205293  742569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 21:52:11.219193  742569 system_svc.go:56] duration metric: took 13.960189ms WaitForService to wait for kubelet.
	I1109 21:52:11.219220  742569 kubeadm.go:581] duration metric: took 15.626055001s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 21:52:11.219240  742569 node_conditions.go:102] verifying NodePressure condition ...
	I1109 21:52:11.399644  742569 request.go:629] Waited for 180.306639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1109 21:52:11.403829  742569 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 21:52:11.403863  742569 node_conditions.go:123] node cpu capacity is 2
	I1109 21:52:11.403877  742569 node_conditions.go:105] duration metric: took 184.631504ms to run NodePressure ...
	I1109 21:52:11.403905  742569 start.go:228] waiting for startup goroutines ...
	I1109 21:52:11.403919  742569 start.go:233] waiting for cluster config update ...
	I1109 21:52:11.403942  742569 start.go:242] writing updated cluster config ...
	I1109 21:52:11.404224  742569 ssh_runner.go:195] Run: rm -f paused
	I1109 21:52:11.465367  742569 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1109 21:52:11.467894  742569 out.go:177] 
	W1109 21:52:11.470209  742569 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1109 21:52:11.472261  742569 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1109 21:52:11.474414  742569 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-861900" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 09 21:56:33 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:56:33.742160172Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=f27a88af-46e6-4b1b-a058-2f9a223f9ba4 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:56:43 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:56:43.658045633Z" level=info msg="Checking image status: k8s.gcr.io/pause:3.2" id=e843f596-4c26-4e61-9871-3b1a69b37729 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:56:43 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:56:43.658281858Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e843f596-4c26-4e61-9871-3b1a69b37729 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:56:44 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:56:44.741699985Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=66c1ece8-1da9-48aa-8c33-39047f17f21b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:56:44 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:56:44.741983389Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=66c1ece8-1da9-48aa-8c33-39047f17f21b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:56:58 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:56:58.741535684Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=0c75744a-d29a-422e-a1e6-047741712939 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:56:58 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:56:58.741810382Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=0c75744a-d29a-422e-a1e6-047741712939 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:09 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:09.741881175Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=b7a478c1-3671-4ad0-837a-39f95f1c03bb name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:09 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:09.742166942Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=b7a478c1-3671-4ad0-837a-39f95f1c03bb name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:21 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:21.741617210Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=e06d14ba-d3bf-45db-b6c9-7d68300e646e name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:21 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:21.741883768Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=e06d14ba-d3bf-45db-b6c9-7d68300e646e name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:21 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:21.742108842Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=059672b6-5ff5-481e-beef-d5814416a93b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:21 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:21.742295664Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=059672b6-5ff5-481e-beef-d5814416a93b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:32 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:32.741609402Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=30987e64-3fe7-45bc-ac8b-eb110204502a name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:32 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:32.741890689Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=30987e64-3fe7-45bc-ac8b-eb110204502a name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:32 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:32.742749466Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=a8c724ec-32fe-4c02-b03c-c85db483fb49 name=/runtime.v1alpha2.ImageService/PullImage
	Nov 09 21:57:32 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:32.744831400Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:57:33 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:33.742277976Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=5ab7ed41-7dfe-4bcf-820e-23c9cd492cf3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:33 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:33.742580711Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=5ab7ed41-7dfe-4bcf-820e-23c9cd492cf3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:48 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:48.741508995Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=56799193-e281-490a-87cc-f0658b4480a7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:48 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:48.741783431Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=56799193-e281-490a-87cc-f0658b4480a7 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:59 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:59.741539415Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=103aa203-b619-4b0f-b927-1350f75d4d56 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:57:59 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:57:59.741809304Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=103aa203-b619-4b0f-b927-1350f75d4d56 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:11 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:11.741622862Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=ea71389c-0e73-477a-b911-ae23aee0c073 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:11 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:11.741896297Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=ea71389c-0e73-477a-b911-ae23aee0c073 name=/runtime.v1alpha2.ImageService/ImageStatus
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8fbecc9c3f547       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   6 minutes ago       Running             storage-provisioner       0                   37d1ea607b8b5       storage-provisioner
	2376cb1b3a6b6       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  6 minutes ago       Running             coredns                   0                   c08abe0554ec6       coredns-66bff467f8-xvlpj
	12c0413d19e2a       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                6 minutes ago       Running             kindnet-cni               0                   2c2e4cab23364       kindnet-qmz79
	6e4b6f3bb3bee       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  6 minutes ago       Running             kube-proxy                0                   2e40e19b9b394       kube-proxy-hzpwp
	4ff81395ca098       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  6 minutes ago       Running             kube-scheduler            0                   4b8298eaa7ed3       kube-scheduler-ingress-addon-legacy-861900
	89853e1bb576e       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  6 minutes ago       Running             etcd                      0                   f9b15b2de5254       etcd-ingress-addon-legacy-861900
	7e2e0409daae4       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  6 minutes ago       Running             kube-controller-manager   0                   61596e31e7a39       kube-controller-manager-ingress-addon-legacy-861900
	e7bf2710aeb7b       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  6 minutes ago       Running             kube-apiserver            0                   9c199d47751a8       kube-apiserver-ingress-addon-legacy-861900
	
	* 
	* ==> coredns [2376cb1b3a6b6813a5d2302411ed07beeb5f8e1f6497ff21408c390d11068428] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:48639 - 30310 "HINFO IN 41319439878355309.3327441200404581037. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.023292771s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-861900
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-861900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b
	                    minikube.k8s.io/name=ingress-addon-legacy-861900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_09T21_51_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Nov 2023 21:51:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-861900
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Nov 2023 21:58:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Nov 2023 21:57:13 +0000   Thu, 09 Nov 2023 21:51:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Nov 2023 21:57:13 +0000   Thu, 09 Nov 2023 21:51:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Nov 2023 21:57:13 +0000   Thu, 09 Nov 2023 21:51:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Nov 2023 21:57:13 +0000   Thu, 09 Nov 2023 21:52:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-861900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 17ba4a3fdd1b457bacd86c8440d8632f
	  System UUID:                994f0811-8333-4938-90be-1fff4e2582ae
	  Boot ID:                    c6805f31-bd75-4a7d-9a37-90ff74c38794
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-ccr5n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-admission-patch-rgzmj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-dc48v              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m1s
	  kube-system                 coredns-66bff467f8-xvlpj                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m19s
	  kube-system                 etcd-ingress-addon-legacy-861900                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kindnet-qmz79                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m19s
	  kube-system                 kube-apiserver-ingress-addon-legacy-861900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-861900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-hzpwp                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ingress-addon-legacy-861900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  6m44s (x5 over 6m44s)  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s (x5 over 6m44s)  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s (x4 over 6m44s)  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m30s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m30s                  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s                  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s                  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m17s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                6m10s                  kubelet     Node ingress-addon-legacy-861900 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001047] FS-Cache: O-key=[8] '04613b0000000000'
	[  +0.000705] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000009519ed76
	[  +0.001234] FS-Cache: N-key=[8] '04613b0000000000'
	[  +1.883823] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000005eb91895
	[  +0.001121] FS-Cache: O-key=[8] '03613b0000000000'
	[  +0.000715] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=00000000afe277c2
	[  +0.001058] FS-Cache: N-key=[8] '03613b0000000000'
	[  +0.314346] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000000067384c
	[  +0.001081] FS-Cache: O-key=[8] '09613b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000004e0bd103
	[  +0.001050] FS-Cache: N-key=[8] '09613b0000000000'
	[  +3.214848] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=00000049 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=000000004b6c5454{9P.session} n=0000000040db7851
	[  +0.001155] FS-Cache: O-key=[10] '34323938393639353234'
	[  +0.000778] FS-Cache: N-cookie c=0000004a [p=00000002 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=000000004b6c5454{9P.session} n=00000000aa25bbf1
	[  +0.001089] FS-Cache: N-key=[10] '34323938393639353234'
	
	* 
	* ==> etcd [89853e1bb576e1a9e0b434efb8cb619e1e4814816a36c27eee433f8f804af1a9] <==
	* raft2023/11/09 21:51:31 INFO: aec36adc501070cc became follower at term 0
	raft2023/11/09 21:51:31 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/09 21:51:31 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/09 21:51:31 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-09 21:51:31.918658 W | auth: simple token is not cryptographically signed
	2023-11-09 21:51:32.002451 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-09 21:51:32.038429 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-09 21:51:32.054515 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-09 21:51:32.266436 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-09 21:51:32.294332 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-09 21:51:32.322334 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/09 21:51:32 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-09 21:51:32.908824 I | etcdserver: published {Name:ingress-addon-legacy-861900 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-09 21:51:32.908874 I | embed: ready to serve client requests
	2023-11-09 21:51:32.919967 I | embed: ready to serve client requests
	2023-11-09 21:51:33.026453 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-09 21:51:33.046429 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-09 21:51:33.062288 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-09 21:51:33.062384 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-09 21:51:33.067667 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  21:58:13 up  4:40,  0 users,  load average: 0.03, 0.43, 0.84
	Linux ingress-addon-legacy-861900 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [12c0413d19e2af170e00351d7872dbe4a650e36feb06b0bbe6b127a217ebae87] <==
	* I1109 21:56:08.362037       1 main.go:227] handling current node
	I1109 21:56:18.371875       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:56:18.371902       1 main.go:227] handling current node
	I1109 21:56:28.374848       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:56:28.374875       1 main.go:227] handling current node
	I1109 21:56:38.381430       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:56:38.381463       1 main.go:227] handling current node
	I1109 21:56:48.384453       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:56:48.384483       1 main.go:227] handling current node
	I1109 21:56:58.388506       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:56:58.388536       1 main.go:227] handling current node
	I1109 21:57:08.394991       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:57:08.395022       1 main.go:227] handling current node
	I1109 21:57:18.398004       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:57:18.398033       1 main.go:227] handling current node
	I1109 21:57:28.403377       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:57:28.403406       1 main.go:227] handling current node
	I1109 21:57:38.415154       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:57:38.415184       1 main.go:227] handling current node
	I1109 21:57:48.419130       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:57:48.419157       1 main.go:227] handling current node
	I1109 21:57:58.422362       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:57:58.422390       1 main.go:227] handling current node
	I1109 21:58:08.433216       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:58:08.433245       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [e7bf2710aeb7bc4b1cd8b33e83d715899c5277475057a2ba6df96976ef84be72] <==
	* E1109 21:51:37.108556       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1109 21:51:37.214411       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1109 21:51:37.214517       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1109 21:51:37.289846       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1109 21:51:37.293027       1 cache.go:39] Caches are synced for autoregister controller
	I1109 21:51:37.293343       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 21:51:37.317978       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1109 21:51:37.383193       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 21:51:38.082133       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1109 21:51:38.082167       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1109 21:51:38.089467       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1109 21:51:38.094420       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1109 21:51:38.094507       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1109 21:51:38.489536       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 21:51:38.526376       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1109 21:51:38.588184       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1109 21:51:38.589188       1 controller.go:609] quota admission added evaluator for: endpoints
	I1109 21:51:38.595275       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 21:51:39.515633       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1109 21:51:40.253705       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1109 21:51:40.349020       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1109 21:51:43.660838       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 21:51:54.933058       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1109 21:51:54.949415       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1109 21:52:12.330201       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [7e2e0409daae43d6039fc6b745df10ddcf31675c7ccec53ae59db703d6f88eec] <==
	* W1109 21:51:55.025406       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-861900. Assuming now as a timestamp.
	I1109 21:51:55.025446       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1109 21:51:55.025737       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1109 21:51:55.027611       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-861900", UID:"13083f17-da80-4417-be3a-db6cdc777fb1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-861900 event: Registered Node ingress-addon-legacy-861900 in Controller
	I1109 21:51:55.076310       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8870801c-b660-481c-9652-ca7ded0789e5", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-s6fzm
	E1109 21:51:55.099527       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"6929bc4c-7e8a-424a-86da-6fd51fdfbd76", ResourceVersion:"217", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63835163500, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001794a20), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4001794a40)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001794a60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001752f00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4001794a80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001794aa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001794ae0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014f7040), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000fc3878), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000899c70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000de6598)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000fc38c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1109 21:51:55.324054       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1109 21:51:55.330288       1 shared_informer.go:230] Caches are synced for resource quota 
	I1109 21:51:55.487003       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1109 21:51:55.487022       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1109 21:51:55.487287       1 shared_informer.go:230] Caches are synced for attach detach 
	I1109 21:51:55.558253       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1109 21:51:55.566877       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"bcecd9c6-e2e7-4a60-957b-9e58f2a6b868", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1109 21:51:55.579454       1 shared_informer.go:230] Caches are synced for PV protection 
	I1109 21:51:55.579493       1 shared_informer.go:230] Caches are synced for expand 
	I1109 21:51:55.579553       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1109 21:51:55.606927       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8870801c-b660-481c-9652-ca7ded0789e5", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-s6fzm
	I1109 21:51:56.378810       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1109 21:51:56.378848       1 shared_informer.go:230] Caches are synced for resource quota 
	I1109 21:52:05.025977       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1109 21:52:12.339176       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3ab493b9-13e2-4968-9d8d-fda76c205949", APIVersion:"apps/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1109 21:52:12.364238       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"967a26a5-b6bd-4d1b-9bfb-025c71119f27", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-ccr5n
	I1109 21:52:12.383670       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"0f32a6a5-086b-4551-a049-bfbe5bc5fd27", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dc48v
	I1109 21:52:12.401494       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d224efe5-ca39-4e29-aa76-38bfc3ee081b", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-rgzmj
	
	* 
	* ==> kube-proxy [6e4b6f3bb3bee815134504a4788b7def949611905937dfa311e8debaec65eba1] <==
	* W1109 21:51:56.246965       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1109 21:51:56.282103       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1109 21:51:56.282255       1 server_others.go:186] Using iptables Proxier.
	I1109 21:51:56.282742       1 server.go:583] Version: v1.18.20
	I1109 21:51:56.290244       1 config.go:133] Starting endpoints config controller
	I1109 21:51:56.290274       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1109 21:51:56.291757       1 config.go:315] Starting service config controller
	I1109 21:51:56.291782       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1109 21:51:56.390405       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1109 21:51:56.391918       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4ff81395ca0988ad3efbbe16de8845b0b6172216dc3f75ea574f05562d6683e9] <==
	* I1109 21:51:37.291027       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 21:51:37.291074       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 21:51:37.291117       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1109 21:51:37.315029       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1109 21:51:37.315376       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1109 21:51:37.315499       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1109 21:51:37.315613       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1109 21:51:37.315753       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1109 21:51:37.315883       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1109 21:51:37.315987       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1109 21:51:37.316071       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 21:51:37.316162       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 21:51:37.316251       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1109 21:51:37.322151       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1109 21:51:37.322340       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1109 21:51:38.128122       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1109 21:51:38.190921       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 21:51:38.241630       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 21:51:38.260860       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1109 21:51:38.274952       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1109 21:51:38.514003       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1109 21:51:41.191263       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1109 21:51:55.113768       1 factory.go:503] pod: kube-system/coredns-66bff467f8-xvlpj is already present in the active queue
	E1109 21:51:55.143936       1 factory.go:503] pod: kube-system/coredns-66bff467f8-s6fzm is already present in the active queue
	E1109 21:51:56.321820       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Nov 09 21:56:09 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:09.016096    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 09 21:56:21 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:21.742231    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:56:22 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:22.323008    1606 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
	Nov 09 21:56:22 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:22.323110    1606 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/6d344081-ebfd-49f4-a545-72ba675e86e7-webhook-cert podName:6d344081-ebfd-49f4-a545-72ba675e86e7 nodeName:}" failed. No retries permitted until 2023-11-09 21:58:24.323079714 +0000 UTC m=+404.124168271 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d344081-ebfd-49f4-a545-72ba675e86e7-webhook-cert\") pod \"ingress-nginx-controller-7fcf777cb7-dc48v\" (UID: \"6d344081-ebfd-49f4-a545-72ba675e86e7\") : secret \"ingress-nginx-admission\" not found"
	Nov 09 21:56:32 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:32.741293    1606 kubelet.go:1703] Unable to attach or mount volumes for pod "ingress-nginx-controller-7fcf777cb7-dc48v_ingress-nginx(6d344081-ebfd-49f4-a545-72ba675e86e7)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-rkb49]: timed out waiting for the condition; skipping pod
	Nov 09 21:56:32 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:32.741338    1606 pod_workers.go:191] Error syncing pod 6d344081-ebfd-49f4-a545-72ba675e86e7 ("ingress-nginx-controller-7fcf777cb7-dc48v_ingress-nginx(6d344081-ebfd-49f4-a545-72ba675e86e7)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-rkb49]: timed out waiting for the condition
	Nov 09 21:56:33 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:33.742602    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:56:43 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:43.802823    1606 container_manager_linux.go:512] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110, memory: /docker/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/system.slice/kubelet.service
	Nov 09 21:56:44 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:44.742195    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:56:58 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:56:58.742235    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:57:09 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:09.506717    1606 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:57:09 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:09.506780    1606 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:57:09 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:09.506838    1606 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:57:09 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:09.506891    1606 pod_workers.go:191] Error syncing pod 138b5bae-7db6-48b0-ba3c-c56c177dbb5f ("ingress-nginx-admission-patch-rgzmj_ingress-nginx(138b5bae-7db6-48b0-ba3c-c56c177dbb5f)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 09 21:57:09 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:09.742398    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:57:21 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:21.742982    1606 pod_workers.go:191] Error syncing pod 138b5bae-7db6-48b0-ba3c-c56c177dbb5f ("ingress-nginx-admission-patch-rgzmj_ingress-nginx(138b5bae-7db6-48b0-ba3c-c56c177dbb5f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:57:21 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:21.743371    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:57:33 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:33.742737    1606 pod_workers.go:191] Error syncing pod 138b5bae-7db6-48b0-ba3c-c56c177dbb5f ("ingress-nginx-admission-patch-rgzmj_ingress-nginx(138b5bae-7db6-48b0-ba3c-c56c177dbb5f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:57:48 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:48.742008    1606 pod_workers.go:191] Error syncing pod 138b5bae-7db6-48b0-ba3c-c56c177dbb5f ("ingress-nginx-admission-patch-rgzmj_ingress-nginx(138b5bae-7db6-48b0-ba3c-c56c177dbb5f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:57:59 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:57:59.741951    1606 pod_workers.go:191] Error syncing pod 138b5bae-7db6-48b0-ba3c-c56c177dbb5f ("ingress-nginx-admission-patch-rgzmj_ingress-nginx(138b5bae-7db6-48b0-ba3c-c56c177dbb5f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:58:03 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:03.030742    1606 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:58:03 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:03.030808    1606 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:58:03 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:03.030868    1606 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:58:03 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:03.030903    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 09 21:58:11 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:11.742235    1606 pod_workers.go:191] Error syncing pod 138b5bae-7db6-48b0-ba3c-c56c177dbb5f ("ingress-nginx-admission-patch-rgzmj_ingress-nginx(138b5bae-7db6-48b0-ba3c-c56c177dbb5f)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [8fbecc9c3f5472a4700e41a971d8b829446928fdb54c4f4884443548babded41] <==
	* I1109 21:52:08.634347       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 21:52:08.649907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 21:52:08.651372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 21:52:08.657253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 21:52:08.657558       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-861900_6f8855dd-d2d9-4c4c-81fe-ee80884e23a6!
	I1109 21:52:08.658825       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe3ef460-1f88-4dbd-9f61-e631a6d9e3ba", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-861900_6f8855dd-d2d9-4c4c-81fe-ee80884e23a6 became leader
	I1109 21:52:08.758519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-861900_6f8855dd-d2d9-4c4c-81fe-ee80884e23a6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-861900 -n ingress-addon-legacy-861900
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-861900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-ccr5n ingress-nginx-admission-patch-rgzmj ingress-nginx-controller-7fcf777cb7-dc48v
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-861900 describe pod ingress-nginx-admission-create-ccr5n ingress-nginx-admission-patch-rgzmj ingress-nginx-controller-7fcf777cb7-dc48v
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-861900 describe pod ingress-nginx-admission-create-ccr5n ingress-nginx-admission-patch-rgzmj ingress-nginx-controller-7fcf777cb7-dc48v: exit status 1 (82.104914ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ccr5n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rgzmj" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-dc48v" not found

** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-861900 describe pod ingress-nginx-admission-create-ccr5n ingress-nginx-admission-patch-rgzmj ingress-nginx-controller-7fcf777cb7-dc48v: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.41s)
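The kubelet errors in the log above show the underlying failure: the admission jobs never started because pulls of jettech/kube-webhook-certgen hit Docker Hub's anonymous pull rate limit (toomanyrequests). A minimal sketch of one common mitigation, assuming a Docker Hub account; the secret name "regcred" is hypothetical, and the pod specs would still need a matching imagePullSecrets entry:

	kubectl --context ingress-addon-legacy-861900 -n ingress-nginx \
	  create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> \
	  --docker-password=<access-token>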

TestIngressAddonLegacy/serial/ValidateIngressAddons (92.46s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-861900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-861900 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (1m30.068160687s)

** stderr ** 
	error: timed out waiting for the condition on pods/ingress-nginx-controller-7fcf777cb7-dc48v

** /stderr **
addons_test.go:207: failed waiting for ingress-nginx-controller : exit status 1
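The 90s wait expired because ingress-nginx-controller-7fcf777cb7-dc48v never became ready. A quick manual diagnosis mirroring the selector the test uses (a sketch, not part of the harness; the pod name is taken from the stderr above):

	kubectl --context ingress-addon-legacy-861900 -n ingress-nginx get pods \
	  --selector=app.kubernetes.io/component=controller
	kubectl --context ingress-addon-legacy-861900 -n ingress-nginx \
	  describe pod ingress-nginx-controller-7fcf777cb7-dc48v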
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-861900
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-861900:

-- stdout --
	[
	    {
	        "Id": "585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110",
	        "Created": "2023-11-09T21:51:05.825345896Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 743031,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T21:51:06.164313049Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/hostname",
	        "HostsPath": "/var/lib/docker/containers/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/hosts",
	        "LogPath": "/var/lib/docker/containers/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110/585037b4c1227e03a99f3bf48114d24470af6d37303c4ae7cf56a41542c4f110-json.log",
	        "Name": "/ingress-addon-legacy-861900",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-861900:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-861900",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5c8d7ee176c000a04f53c515e94bf6d7dcf6d89aefb4e433df5046cab97170c4-init/diff:/var/lib/docker/overlay2/7d8c4fc646533218e970cbbc2fae53265551a122428b3ce7f5ec8807d1cc9c68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c8d7ee176c000a04f53c515e94bf6d7dcf6d89aefb4e433df5046cab97170c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c8d7ee176c000a04f53c515e94bf6d7dcf6d89aefb4e433df5046cab97170c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c8d7ee176c000a04f53c515e94bf6d7dcf6d89aefb4e433df5046cab97170c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-861900",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-861900/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-861900",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-861900",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-861900",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "04d42e0c1ae75de0bb2d9545510cd033f55ec411f420237303f4b8d438827aa5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33690"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33689"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33686"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33688"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33687"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/04d42e0c1ae7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-861900": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "585037b4c122",
	                        "ingress-addon-legacy-861900"
	                    ],
	                    "NetworkID": "7014a50d33f8d4bd752ad2c32fcaf50e13607d4948bf7731d462ff2e96b450f9",
	                    "EndpointID": "bb5f9ab5286632afa6dc31ea8eef8ae45a233daec0e1910488c382c794428c19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
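Every PortBindings entry in the inspect output above has an empty HostPort, meaning Docker chose ephemeral host ports at container start; the resolved values appear under NetworkSettings.Ports. A sketch of reading one back, using the same Go template minikube itself runs later in this log:

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  ingress-addon-legacy-861900
	# prints 33690 (the SSH port) for this run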
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-861900 -n ingress-addon-legacy-861900
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-861900 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-861900 logs -n 25: (1.368343957s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-133528 image ls                                             | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	| image          | functional-133528 image load                                           | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-133528 image ls                                             | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	| image          | functional-133528 image save --daemon                                  | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-133528               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/ssl/certs/713573.pem                                              |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /usr/share/ca-certificates/713573.pem                                  |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/ssl/certs/7135732.pem                                             |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /usr/share/ca-certificates/7135732.pem                                 |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh sudo cat                                         | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | /etc/test/nested/copy/713573/hosts                                     |                             |         |         |                     |                     |
	| image          | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-133528 ssh pgrep                                            | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-133528 image build -t                                       | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | localhost/my-image:functional-133528                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-133528 image ls                                             | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	| image          | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-133528                                                      | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-133528                                                   | functional-133528           | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:50 UTC |
	| start          | -p ingress-addon-legacy-861900                                         | ingress-addon-legacy-861900 | jenkins | v1.32.0 | 09 Nov 23 21:50 UTC | 09 Nov 23 21:52 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-861900                                            | ingress-addon-legacy-861900 | jenkins | v1.32.0 | 09 Nov 23 21:52 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-861900                                            | ingress-addon-legacy-861900 | jenkins | v1.32.0 | 09 Nov 23 21:58 UTC | 09 Nov 23 21:58 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/09 21:50:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 21:50:45.418463  742569 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:50:45.418591  742569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:50:45.418601  742569 out.go:309] Setting ErrFile to fd 2...
	I1109 21:50:45.418607  742569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:50:45.418864  742569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 21:50:45.419281  742569 out.go:303] Setting JSON to false
	I1109 21:50:45.420286  742569 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16395,"bootTime":1699550250,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 21:50:45.420366  742569 start.go:138] virtualization:  
	I1109 21:50:45.422882  742569 out.go:177] * [ingress-addon-legacy-861900] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 21:50:45.425127  742569 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 21:50:45.427229  742569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 21:50:45.425297  742569 notify.go:220] Checking for updates...
	I1109 21:50:45.430957  742569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:50:45.432630  742569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 21:50:45.434427  742569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 21:50:45.436219  742569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 21:50:45.438224  742569 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 21:50:45.462095  742569 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 21:50:45.462195  742569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:50:45.538771  742569 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-09 21:50:45.529340072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:50:45.538927  742569 docker.go:295] overlay module found
	I1109 21:50:45.541147  742569 out.go:177] * Using the docker driver based on user configuration
	I1109 21:50:45.542856  742569 start.go:298] selected driver: docker
	I1109 21:50:45.542874  742569 start.go:902] validating driver "docker" against <nil>
	I1109 21:50:45.542893  742569 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 21:50:45.543538  742569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:50:45.610265  742569 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-09 21:50:45.601351572 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:50:45.610448  742569 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1109 21:50:45.610675  742569 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 21:50:45.613035  742569 out.go:177] * Using Docker driver with root privileges
	I1109 21:50:45.615065  742569 cni.go:84] Creating CNI manager for ""
	I1109 21:50:45.615083  742569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:50:45.615099  742569 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 21:50:45.615121  742569 start_flags.go:323] config:
	{Name:ingress-addon-legacy-861900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-861900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:50:45.617118  742569 out.go:177] * Starting control plane node ingress-addon-legacy-861900 in cluster ingress-addon-legacy-861900
	I1109 21:50:45.618988  742569 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 21:50:45.621019  742569 out.go:177] * Pulling base image ...
	I1109 21:50:45.623388  742569 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1109 21:50:45.623477  742569 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1109 21:50:45.640600  742569 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1109 21:50:45.640629  742569 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1109 21:50:45.687852  742569 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1109 21:50:45.687891  742569 cache.go:56] Caching tarball of preloaded images
	I1109 21:50:45.688052  742569 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1109 21:50:45.690336  742569 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1109 21:50:45.692617  742569 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:50:45.806723  742569 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1109 21:50:57.889188  742569 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:50:57.889293  742569 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:50:59.078610  742569 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
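	The checksum query parameter on the download URL above makes the preload verifiable by hand; a hypothetical manual check against the same md5, using the cache path from the log:
	  md5sum /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	  # expected: 8ddd7f37d9a9977fe856222993d36c3d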
	I1109 21:50:59.078996  742569 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/config.json ...
	I1109 21:50:59.079034  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/config.json: {Name:mkfb3684ff169eedb6a0ee7058211adbfaef9a25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:50:59.079232  742569 cache.go:194] Successfully downloaded all kic artifacts
	I1109 21:50:59.079282  742569 start.go:365] acquiring machines lock for ingress-addon-legacy-861900: {Name:mk4364e9b38a22c26b621152ffbe453bb0f10d3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 21:50:59.079345  742569 start.go:369] acquired machines lock for "ingress-addon-legacy-861900" in 47.024µs
	I1109 21:50:59.079367  742569 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-861900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-861900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 21:50:59.079440  742569 start.go:125] createHost starting for "" (driver="docker")
	I1109 21:50:59.081702  742569 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1109 21:50:59.081924  742569 start.go:159] libmachine.API.Create for "ingress-addon-legacy-861900" (driver="docker")
	I1109 21:50:59.081966  742569 client.go:168] LocalClient.Create starting
	I1109 21:50:59.082039  742569 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem
	I1109 21:50:59.082075  742569 main.go:141] libmachine: Decoding PEM data...
	I1109 21:50:59.082096  742569 main.go:141] libmachine: Parsing certificate...
	I1109 21:50:59.082169  742569 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem
	I1109 21:50:59.082193  742569 main.go:141] libmachine: Decoding PEM data...
	I1109 21:50:59.082208  742569 main.go:141] libmachine: Parsing certificate...
	I1109 21:50:59.082590  742569 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-861900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 21:50:59.100467  742569 cli_runner.go:211] docker network inspect ingress-addon-legacy-861900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 21:50:59.100554  742569 network_create.go:281] running [docker network inspect ingress-addon-legacy-861900] to gather additional debugging logs...
	I1109 21:50:59.100577  742569 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-861900
	W1109 21:50:59.130495  742569 cli_runner.go:211] docker network inspect ingress-addon-legacy-861900 returned with exit code 1
	I1109 21:50:59.130530  742569 network_create.go:284] error running [docker network inspect ingress-addon-legacy-861900]: docker network inspect ingress-addon-legacy-861900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-861900 not found
	I1109 21:50:59.130544  742569 network_create.go:286] output of [docker network inspect ingress-addon-legacy-861900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-861900 not found
	
	** /stderr **
	I1109 21:50:59.130661  742569 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 21:50:59.148570  742569 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40005d75d0}
	I1109 21:50:59.148608  742569 network_create.go:124] attempt to create docker network ingress-addon-legacy-861900 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 21:50:59.148670  742569 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 ingress-addon-legacy-861900
	I1109 21:50:59.219319  742569 network_create.go:108] docker network ingress-addon-legacy-861900 192.168.49.0/24 created
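	With the network in place, a sketch of double-checking the subnet that was picked (standard docker CLI, run outside the harness):
	  docker network inspect ingress-addon-legacy-861900 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	  # 192.168.49.0/24 gw 192.168.49.1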
	I1109 21:50:59.219355  742569 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-861900" container
	I1109 21:50:59.219432  742569 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 21:50:59.235541  742569 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-861900 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --label created_by.minikube.sigs.k8s.io=true
	I1109 21:50:59.254545  742569 oci.go:103] Successfully created a docker volume ingress-addon-legacy-861900
	I1109 21:50:59.254632  742569 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-861900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --entrypoint /usr/bin/test -v ingress-addon-legacy-861900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1109 21:51:00.787720  742569 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-861900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --entrypoint /usr/bin/test -v ingress-addon-legacy-861900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib: (1.53303961s)
	I1109 21:51:00.787753  742569 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-861900
	I1109 21:51:00.787773  742569 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1109 21:51:00.787793  742569 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 21:51:00.787884  742569 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-861900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 21:51:05.744827  742569 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-861900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.956896098s)
	I1109 21:51:05.744861  742569 kic.go:203] duration metric: took 4.957065 seconds to extract preloaded images to volume
	W1109 21:51:05.744999  742569 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 21:51:05.745108  742569 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 21:51:05.810054  742569 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-861900 --name ingress-addon-legacy-861900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-861900 --network ingress-addon-legacy-861900 --ip 192.168.49.2 --volume ingress-addon-legacy-861900:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1109 21:51:06.175172  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Running}}
	I1109 21:51:06.201776  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:06.227835  742569 cli_runner.go:164] Run: docker exec ingress-addon-legacy-861900 stat /var/lib/dpkg/alternatives/iptables
	I1109 21:51:06.326570  742569 oci.go:144] the created container "ingress-addon-legacy-861900" has a running status.
	I1109 21:51:06.326608  742569 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa...
	I1109 21:51:07.383662  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1109 21:51:07.383751  742569 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 21:51:07.410675  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:07.429099  742569 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 21:51:07.429123  742569 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-861900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 21:51:07.497359  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:07.520153  742569 machine.go:88] provisioning docker machine ...
	I1109 21:51:07.520188  742569 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-861900"
	I1109 21:51:07.520254  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:07.538079  742569 main.go:141] libmachine: Using SSH client type: native
	I1109 21:51:07.538546  742569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33690 <nil> <nil>}
	I1109 21:51:07.538567  742569 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-861900 && echo "ingress-addon-legacy-861900" | sudo tee /etc/hostname
	I1109 21:51:07.692071  742569 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-861900
	
	I1109 21:51:07.692208  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:07.710427  742569 main.go:141] libmachine: Using SSH client type: native
	I1109 21:51:07.710841  742569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33690 <nil> <nil>}
	I1109 21:51:07.710860  742569 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-861900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-861900/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-861900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 21:51:07.851506  742569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 21:51:07.851531  742569 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 21:51:07.851556  742569 ubuntu.go:177] setting up certificates
	I1109 21:51:07.851564  742569 provision.go:83] configureAuth start
	I1109 21:51:07.851636  742569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-861900
	I1109 21:51:07.869018  742569 provision.go:138] copyHostCerts
	I1109 21:51:07.869052  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 21:51:07.869084  742569 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 21:51:07.869091  742569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 21:51:07.869163  742569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 21:51:07.869246  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 21:51:07.869262  742569 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 21:51:07.869266  742569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 21:51:07.869292  742569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 21:51:07.869340  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 21:51:07.869355  742569 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 21:51:07.869359  742569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 21:51:07.869390  742569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 21:51:07.869441  742569 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-861900 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-861900]
	I1109 21:51:08.075809  742569 provision.go:172] copyRemoteCerts
	I1109 21:51:08.075883  742569 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 21:51:08.075930  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.093952  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:08.197011  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 21:51:08.197072  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 21:51:08.225868  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 21:51:08.225935  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1109 21:51:08.254344  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 21:51:08.254403  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 21:51:08.282886  742569 provision.go:86] duration metric: configureAuth took 431.30749ms
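
configureAuth above copies the host CA material and mints a server certificate whose SAN list covers the node IP, loopback, and the machine names. minikube does this in Go; an openssl equivalent (illustrative only, not what the binary runs) would be:

    # Illustrative openssl equivalent of the server cert generated above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ingress-addon-legacy-861900"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:ingress-addon-legacy-861900')
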
	I1109 21:51:08.282914  742569 ubuntu.go:193] setting minikube options for container-runtime
	I1109 21:51:08.283109  742569 config.go:182] Loaded profile config "ingress-addon-legacy-861900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1109 21:51:08.283259  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.301107  742569 main.go:141] libmachine: Using SSH client type: native
	I1109 21:51:08.301538  742569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33690 <nil> <nil>}
	I1109 21:51:08.301562  742569 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 21:51:08.579643  742569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 21:51:08.579670  742569 machine.go:91] provisioned docker machine in 1.059498004s
	I1109 21:51:08.579680  742569 client.go:171] LocalClient.Create took 9.497706313s
	I1109 21:51:08.579692  742569 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-861900" took 9.497768565s
	I1109 21:51:08.579699  742569 start.go:300] post-start starting for "ingress-addon-legacy-861900" (driver="docker")
	I1109 21:51:08.579710  742569 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 21:51:08.579787  742569 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 21:51:08.579830  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.597508  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:08.697175  742569 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 21:51:08.701298  742569 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 21:51:08.701338  742569 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 21:51:08.701372  742569 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 21:51:08.701387  742569 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1109 21:51:08.701404  742569 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 21:51:08.701488  742569 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 21:51:08.701575  742569 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 21:51:08.701587  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> /etc/ssl/certs/7135732.pem
	I1109 21:51:08.701706  742569 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 21:51:08.711870  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 21:51:08.739420  742569 start.go:303] post-start completed in 159.706147ms
	I1109 21:51:08.739839  742569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-861900
	I1109 21:51:08.756416  742569 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/config.json ...
	I1109 21:51:08.756692  742569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 21:51:08.756738  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.774683  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:08.872239  742569 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
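
The two df probes above are quick capacity checks on /var before the cluster is started: the first yields the used-space percentage, the second the free space in whole gigabytes.

    df -h /var | awk 'NR==2{print $5}'    # e.g. "12%"  -> percent of /var in use
    df -BG /var | awk 'NR==2{print $4}'   # e.g. "180G" -> gigabytes still free
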
	I1109 21:51:08.877839  742569 start.go:128] duration metric: createHost completed in 9.798384762s
	I1109 21:51:08.877864  742569 start.go:83] releasing machines lock for "ingress-addon-legacy-861900", held for 9.798507708s
	I1109 21:51:08.877960  742569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-861900
	I1109 21:51:08.896144  742569 ssh_runner.go:195] Run: cat /version.json
	I1109 21:51:08.896203  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.896431  742569 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 21:51:08.896491  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:08.918883  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:08.925958  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:09.151387  742569 ssh_runner.go:195] Run: systemctl --version
	I1109 21:51:09.157163  742569 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 21:51:09.304640  742569 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 21:51:09.310408  742569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 21:51:09.333841  742569 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 21:51:09.333921  742569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 21:51:09.371885  742569 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1109 21:51:09.371905  742569 start.go:472] detecting cgroup driver to use...
	I1109 21:51:09.371937  742569 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1109 21:51:09.371985  742569 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 21:51:09.389629  742569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 21:51:09.403784  742569 docker.go:203] disabling cri-docker service (if available) ...
	I1109 21:51:09.403898  742569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 21:51:09.419756  742569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 21:51:09.436395  742569 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 21:51:09.527603  742569 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 21:51:09.629373  742569 docker.go:219] disabling docker service ...
	I1109 21:51:09.629452  742569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 21:51:09.654697  742569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 21:51:09.669540  742569 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 21:51:09.774800  742569 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 21:51:09.878578  742569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
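
The stop/disable/mask sequence above is the standard way to make sure a competing runtime stays down: stop halts it now, disable removes boot-time activation, and mask links the unit to /dev/null so even socket activation cannot restart it. Condensed:

    # Keep dockerd (and cri-dockerd) from racing CRI-O for the runtime role.
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service      # unit now points at /dev/null
    sudo systemctl is-active --quiet docker || echo "docker is down"
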
	I1109 21:51:09.893253  742569 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 21:51:09.912616  742569 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1109 21:51:09.912735  742569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:51:09.924690  742569 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 21:51:09.924814  742569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:51:09.937102  742569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:51:09.948967  742569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 21:51:09.960824  742569 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 21:51:09.972144  742569 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 21:51:09.985875  742569 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 21:51:09.996168  742569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 21:51:10.101340  742569 ssh_runner.go:195] Run: sudo systemctl restart crio
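
Everything from the crictl.yaml write to this restart is CRI-O reconfiguration: point crictl at the CRI-O socket, pin the pause image, and match the kubelet's cgroupfs driver (with conmon moved to the pod cgroup). The same steps as one script, using the exact paths from the log:

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio
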
	I1109 21:51:10.234932  742569 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 21:51:10.235062  742569 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 21:51:10.239693  742569 start.go:540] Will wait 60s for crictl version
	I1109 21:51:10.239792  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:10.245139  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 21:51:10.287983  742569 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1109 21:51:10.288090  742569 ssh_runner.go:195] Run: crio --version
	I1109 21:51:10.332141  742569 ssh_runner.go:195] Run: crio --version
	I1109 21:51:10.380071  742569 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1109 21:51:10.381875  742569 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-861900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 21:51:10.401607  742569 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 21:51:10.406464  742569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
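
That one-liner is a safe-rewrite idiom for /etc/hosts: strip any stale host.minikube.internal mapping, append the fresh one, stage the result in a PID-suffixed temp file, and install it with sudo cp (a plain > redirect would run without privileges). Generalized sketch, with a simplified grep match:

    # Safe /etc/hosts rewrite, as above; NAME/ADDR are placeholders.
    NAME=host.minikube.internal ADDR=192.168.49.1
    { grep -v "$NAME" /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
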
	I1109 21:51:10.420155  742569 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1109 21:51:10.421626  742569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 21:51:10.475348  742569 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1109 21:51:10.475419  742569 ssh_runner.go:195] Run: which lz4
	I1109 21:51:10.479723  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1109 21:51:10.479845  742569 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1109 21:51:10.484056  742569 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1109 21:51:10.484088  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1109 21:51:12.514561  742569 crio.go:444] Took 2.034742 seconds to copy over tarball
	I1109 21:51:12.514636  742569 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1109 21:51:15.183705  742569 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.669040061s)
	I1109 21:51:15.183729  742569 crio.go:451] Took 2.669144 seconds to extract the tarball
	I1109 21:51:15.183739  742569 ssh_runner.go:146] rm: /preloaded.tar.lz4
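
The preload dance above is: stat the tarball inside the node (a miss means it must be copied), scp the ~490 MB archive in, unpack it over /var so CRI-O's image store is pre-seeded, then delete the archive. Node-side, that reduces to:

    stat -c "%s %y" /preloaded.tar.lz4              # exit 1 here triggered the scp above
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4  # lz4-decompress and extract over /var
    sudo rm /preloaded.tar.lz4                      # reclaim the ~490 MB once extracted
    sudo crictl images --output json                # verify the image store is populated
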
	I1109 21:51:15.348173  742569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 21:51:15.386830  742569 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1109 21:51:15.386856  742569 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1109 21:51:15.386901  742569 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:15.386925  742569 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1109 21:51:15.387075  742569 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:15.387079  742569 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1109 21:51:15.387137  742569 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1109 21:51:15.387152  742569 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1109 21:51:15.387211  742569 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1109 21:51:15.387227  742569 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:15.388383  742569 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:15.388785  742569 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:15.389038  742569 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1109 21:51:15.389227  742569 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1109 21:51:15.389445  742569 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1109 21:51:15.389504  742569 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:15.389547  742569 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1109 21:51:15.389595  742569 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	W1109 21:51:15.728950  742569 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.729121  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1109 21:51:15.752629  742569 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.752819  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:15.769476  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1109 21:51:15.794603  742569 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1109 21:51:15.794705  742569 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1109 21:51:15.794781  742569 ssh_runner.go:195] Run: which crictl
	W1109 21:51:15.800176  742569 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.800482  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1109 21:51:15.810604  742569 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.811051  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1109 21:51:15.812120  742569 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.812311  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W1109 21:51:15.832712  742569 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.832934  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:15.837303  742569 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1109 21:51:15.837387  742569 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:15.837464  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.870614  742569 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1109 21:51:15.870792  742569 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1109 21:51:15.870749  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1109 21:51:15.870860  742569 ssh_runner.go:195] Run: which crictl
	W1109 21:51:15.926516  742569 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1109 21:51:15.926739  742569 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:15.977479  742569 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1109 21:51:15.977562  742569 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1109 21:51:15.977645  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.980981  742569 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1109 21:51:15.981062  742569 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1109 21:51:15.981135  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.981253  742569 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1109 21:51:15.981288  742569 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1109 21:51:15.981330  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.995689  742569 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1109 21:51:15.995769  742569 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:15.995848  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:15.995958  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1109 21:51:16.024989  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1109 21:51:16.025149  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1109 21:51:16.176849  742569 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1109 21:51:16.176896  742569 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:16.176946  742569 ssh_runner.go:195] Run: which crictl
	I1109 21:51:16.177020  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1109 21:51:16.177024  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1109 21:51:16.177091  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1109 21:51:16.177162  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1109 21:51:16.177230  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1109 21:51:16.177248  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1109 21:51:16.282115  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1109 21:51:16.282228  742569 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:16.282393  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1109 21:51:16.282283  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1109 21:51:16.282341  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1109 21:51:16.332757  742569 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1109 21:51:16.332823  742569 cache_images.go:92] LoadImages completed in 945.954656ms
	W1109 21:51:16.332904  742569 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20: no such file or directory
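
The arch-mismatch warnings explain the fallout here: the daemon's copies of these images are amd64 builds, this node is arm64, so every expected digest is missing from CRI-O ("needs transfer"), and with no arm64 files in the local image cache the load is abandoned; the images get pulled fresh during kubeadm preflight instead. A quick manual check of the same condition, mirroring the log's podman usage:

    sudo podman image inspect --format '{{.Architecture}}' registry.k8s.io/pause:3.2
    uname -m   # aarch64 on this node; "amd64" from the inspect above means a mismatch
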
	I1109 21:51:16.332972  742569 ssh_runner.go:195] Run: crio config
	I1109 21:51:16.384545  742569 cni.go:84] Creating CNI manager for ""
	I1109 21:51:16.384565  742569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:51:16.384595  742569 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 21:51:16.384619  742569 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-861900 NodeName:ingress-addon-legacy-861900 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1109 21:51:16.384764  742569 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-861900"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 21:51:16.384854  742569 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-861900 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-861900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
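
A detail worth noting in the kubelet unit above: the empty ExecStart= line is deliberate. ExecStart is a list-type systemd directive, and a drop-in must clear it before redefining it, or systemd would reject a second value for a non-oneshot service. Installing such a drop-in by hand looks like this (flag list abbreviated from the log):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload   # pick up the new drop-in
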
	I1109 21:51:16.384923  742569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1109 21:51:16.395320  742569 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 21:51:16.395390  742569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 21:51:16.406910  742569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1109 21:51:16.427342  742569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1109 21:51:16.447689  742569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1109 21:51:16.468801  742569 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 21:51:16.473284  742569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 21:51:16.486543  742569 certs.go:56] Setting up /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900 for IP: 192.168.49.2
	I1109 21:51:16.486578  742569 certs.go:190] acquiring lock for shared ca certs: {Name:mk44b1a46a3acda84ddb5040e7a20ebcace98935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:16.486777  742569 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key
	I1109 21:51:16.486864  742569 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key
	I1109 21:51:16.486927  742569 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key
	I1109 21:51:16.486941  742569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt with IP's: []
	I1109 21:51:16.958378  742569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt ...
	I1109 21:51:16.958409  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: {Name:mk7aa7e55e97645ec9e7306f3f97250a72dcf0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:16.958628  742569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key ...
	I1109 21:51:16.958644  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key: {Name:mkb5a3ab7cc38c0584227f902360b3e2a653e988 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:16.958729  742569 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key.dd3b5fb2
	I1109 21:51:16.958759  742569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1109 21:51:17.523161  742569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt.dd3b5fb2 ...
	I1109 21:51:17.523197  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt.dd3b5fb2: {Name:mk4551e780e11b77718a4e1dcaba07e2f499c6e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:17.523388  742569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key.dd3b5fb2 ...
	I1109 21:51:17.523404  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key.dd3b5fb2: {Name:mk90613120d42f665b93b64cf37583d163f4b9a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:17.523489  742569 certs.go:337] copying /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt
	I1109 21:51:17.523575  742569 certs.go:341] copying /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key
	I1109 21:51:17.523637  742569 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key
	I1109 21:51:17.523655  742569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt with IP's: []
	I1109 21:51:18.352516  742569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt ...
	I1109 21:51:18.352548  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt: {Name:mkf7d0a3a5f2788403184380e5ad82ce03c7e1fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:18.352735  742569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key ...
	I1109 21:51:18.352749  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key: {Name:mk74e0f683b73cde82493d7dd6ec33b6dc3d2eba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:18.352834  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 21:51:18.352855  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 21:51:18.352873  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 21:51:18.352894  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 21:51:18.352909  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 21:51:18.352920  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 21:51:18.352936  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 21:51:18.352952  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 21:51:18.353013  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem (1338 bytes)
	W1109 21:51:18.353054  742569 certs.go:433] ignoring /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573_empty.pem, impossibly tiny 0 bytes
	I1109 21:51:18.353065  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 21:51:18.353096  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem (1078 bytes)
	I1109 21:51:18.353128  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem (1123 bytes)
	I1109 21:51:18.353156  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem (1679 bytes)
	I1109 21:51:18.353205  742569 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 21:51:18.353239  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem -> /usr/share/ca-certificates/713573.pem
	I1109 21:51:18.353256  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> /usr/share/ca-certificates/7135732.pem
	I1109 21:51:18.353268  742569 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:51:18.353850  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 21:51:18.382137  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 21:51:18.409997  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 21:51:18.436959  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 21:51:18.464067  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 21:51:18.491092  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 21:51:18.518963  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 21:51:18.546248  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 21:51:18.573679  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem --> /usr/share/ca-certificates/713573.pem (1338 bytes)
	I1109 21:51:18.601235  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /usr/share/ca-certificates/7135732.pem (1708 bytes)
	I1109 21:51:18.628790  742569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 21:51:18.656502  742569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 21:51:18.676612  742569 ssh_runner.go:195] Run: openssl version
	I1109 21:51:18.683437  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/713573.pem && ln -fs /usr/share/ca-certificates/713573.pem /etc/ssl/certs/713573.pem"
	I1109 21:51:18.694541  742569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/713573.pem
	I1109 21:51:18.698958  742569 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  9 21:41 /usr/share/ca-certificates/713573.pem
	I1109 21:51:18.699017  742569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/713573.pem
	I1109 21:51:18.707268  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/713573.pem /etc/ssl/certs/51391683.0"
	I1109 21:51:18.718509  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135732.pem && ln -fs /usr/share/ca-certificates/7135732.pem /etc/ssl/certs/7135732.pem"
	I1109 21:51:18.729338  742569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135732.pem
	I1109 21:51:18.733817  742569 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  9 21:41 /usr/share/ca-certificates/7135732.pem
	I1109 21:51:18.733926  742569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135732.pem
	I1109 21:51:18.742173  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7135732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 21:51:18.753770  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 21:51:18.764674  742569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:51:18.769285  742569 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  9 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:51:18.769350  742569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 21:51:18.777787  742569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
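
The hash/symlink pairs above follow OpenSSL's CA directory convention: a trusted cert is looked up in /etc/ssl/certs by the hash of its subject name, so each CA needs a <subject-hash>.0 symlink (hence 51391683.0, 3ec20f2e.0, b5213941.0). Registering a CA by hand:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
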
	I1109 21:51:18.789075  742569 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1109 21:51:18.793479  742569 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1109 21:51:18.793581  742569 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-861900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-861900 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:51:18.793697  742569 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 21:51:18.793757  742569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 21:51:18.834384  742569 cri.go:89] found id: ""
	I1109 21:51:18.834498  742569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 21:51:18.845209  742569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 21:51:18.855608  742569 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1109 21:51:18.855700  742569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 21:51:18.865881  742569 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 21:51:18.865928  742569 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 21:51:18.921743  742569 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1109 21:51:18.921915  742569 kubeadm.go:322] [preflight] Running pre-flight checks
	I1109 21:51:18.979536  742569 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1109 21:51:18.979606  742569 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1109 21:51:18.979648  742569 kubeadm.go:322] OS: Linux
	I1109 21:51:18.979705  742569 kubeadm.go:322] CGROUPS_CPU: enabled
	I1109 21:51:18.979755  742569 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1109 21:51:18.979804  742569 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1109 21:51:18.979854  742569 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1109 21:51:18.979902  742569 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1109 21:51:18.979952  742569 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1109 21:51:19.076089  742569 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 21:51:19.076205  742569 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 21:51:19.076297  742569 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 21:51:19.308268  742569 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 21:51:19.309675  742569 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 21:51:19.309920  742569 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1109 21:51:19.414708  742569 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 21:51:19.417842  742569 out.go:204]   - Generating certificates and keys ...
	I1109 21:51:19.417943  742569 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1109 21:51:19.418036  742569 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1109 21:51:20.914684  742569 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 21:51:21.520430  742569 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1109 21:51:21.890763  742569 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1109 21:51:22.512348  742569 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1109 21:51:22.803467  742569 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1109 21:51:22.804017  742569 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-861900 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 21:51:22.914688  742569 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1109 21:51:22.915066  742569 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-861900 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 21:51:23.296053  742569 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 21:51:23.468116  742569 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 21:51:23.812655  742569 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1109 21:51:23.813072  742569 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 21:51:24.879276  742569 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 21:51:25.188308  742569 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 21:51:25.421218  742569 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 21:51:25.868403  742569 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 21:51:25.869649  742569 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 21:51:25.871879  742569 out.go:204]   - Booting up control plane ...
	I1109 21:51:25.871970  742569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 21:51:25.886735  742569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 21:51:25.888699  742569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 21:51:25.890244  742569 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 21:51:25.893211  742569 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 21:51:38.896071  742569 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002358 seconds
	I1109 21:51:38.896192  742569 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 21:51:38.912125  742569 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 21:51:39.430085  742569 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 21:51:39.430232  742569 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-861900 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1109 21:51:39.938956  742569 kubeadm.go:322] [bootstrap-token] Using token: sofv9u.ps5ywt9mluyicgmk
	I1109 21:51:39.941443  742569 out.go:204]   - Configuring RBAC rules ...
	I1109 21:51:39.941581  742569 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 21:51:39.945514  742569 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 21:51:39.953941  742569 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 21:51:39.958041  742569 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 21:51:39.962116  742569 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 21:51:39.967808  742569 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 21:51:39.983106  742569 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 21:51:40.270745  742569 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1109 21:51:40.379981  742569 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1109 21:51:40.385004  742569 kubeadm.go:322] 
	I1109 21:51:40.385078  742569 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1109 21:51:40.385090  742569 kubeadm.go:322] 
	I1109 21:51:40.385162  742569 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1109 21:51:40.385174  742569 kubeadm.go:322] 
	I1109 21:51:40.385199  742569 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1109 21:51:40.385277  742569 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 21:51:40.385337  742569 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 21:51:40.385346  742569 kubeadm.go:322] 
	I1109 21:51:40.385395  742569 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1109 21:51:40.385468  742569 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 21:51:40.385534  742569 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 21:51:40.385544  742569 kubeadm.go:322] 
	I1109 21:51:40.385623  742569 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 21:51:40.385702  742569 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1109 21:51:40.385711  742569 kubeadm.go:322] 
	I1109 21:51:40.385789  742569 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token sofv9u.ps5ywt9mluyicgmk \
	I1109 21:51:40.385893  742569 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 \
	I1109 21:51:40.386120  742569 kubeadm.go:322]     --control-plane 
	I1109 21:51:40.386135  742569 kubeadm.go:322] 
	I1109 21:51:40.386214  742569 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1109 21:51:40.386222  742569 kubeadm.go:322] 
	I1109 21:51:40.386299  742569 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token sofv9u.ps5ywt9mluyicgmk \
	I1109 21:51:40.386431  742569 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 
	I1109 21:51:40.389560  742569 kubeadm.go:322] W1109 21:51:18.920948    1229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1109 21:51:40.389796  742569 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1109 21:51:40.389906  742569 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 21:51:40.390056  742569 kubeadm.go:322] W1109 21:51:25.887010    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1109 21:51:40.390192  742569 kubeadm.go:322] W1109 21:51:25.888953    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1109 21:51:40.390217  742569 cni.go:84] Creating CNI manager for ""
	I1109 21:51:40.390233  742569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:51:40.392478  742569 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1109 21:51:40.394478  742569 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 21:51:40.399865  742569 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1109 21:51:40.399888  742569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1109 21:51:40.421714  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 21:51:40.936393  742569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 21:51:40.936537  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:40.936611  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b minikube.k8s.io/name=ingress-addon-legacy-861900 minikube.k8s.io/updated_at=2023_11_09T21_51_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:41.077408  742569 ops.go:34] apiserver oom_adj: -16
	I1109 21:51:41.077491  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:41.169448  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:41.765046  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:42.265317  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:42.765444  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:43.264564  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:43.764483  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:44.265301  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:44.765188  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:45.264504  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:45.764457  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:46.265016  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:46.765301  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:47.265014  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:47.764435  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:48.265456  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:48.765154  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:49.265192  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:49.765286  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:50.265524  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:50.765305  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:51.265209  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:51.764573  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:52.264820  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:52.764817  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:53.265309  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:53.764921  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:54.264509  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:54.765232  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:55.264745  742569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 21:51:55.400254  742569 kubeadm.go:1081] duration metric: took 14.463769128s to wait for elevateKubeSystemPrivileges.
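
The burst of identical "kubectl get sa default" runs above is minikube polling until the "default" ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration metric measures. A minimal bash sketch of that wait loop, using the binary and kubeconfig paths from the log (the ~0.5s interval matches the timestamp spacing visible above, but minikube's real retry cadence is an assumption here):

    # Poll until the "default" ServiceAccount exists; it is created once the
    # control plane is serving, so this doubles as a readiness gate.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # assumed interval; the log shows ~0.5s between attempts
    done
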
	I1109 21:51:55.400284  742569 kubeadm.go:406] StartCluster complete in 36.606717352s
	I1109 21:51:55.400301  742569 settings.go:142] acquiring lock: {Name:mk717b4baf2280543b738622644195ea0d60d476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:55.400360  742569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:51:55.401151  742569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/kubeconfig: {Name:mk5701fd19491b0b49f183ef877286e38ea5f8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 21:51:55.401886  742569 kapi.go:59] client config for ingress-addon-legacy-861900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 21:51:55.402502  742569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 21:51:55.402777  742569 config.go:182] Loaded profile config "ingress-addon-legacy-861900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1109 21:51:55.402813  742569 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1109 21:51:55.402876  742569 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-861900"
	I1109 21:51:55.402891  742569 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-861900"
	I1109 21:51:55.402945  742569 host.go:66] Checking if "ingress-addon-legacy-861900" exists ...
	I1109 21:51:55.403414  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:55.403885  742569 cert_rotation.go:137] Starting client certificate rotation controller
	I1109 21:51:55.404037  742569 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-861900"
	I1109 21:51:55.404059  742569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-861900"
	I1109 21:51:55.404378  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:55.467424  742569 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 21:51:55.469812  742569 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 21:51:55.469839  742569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 21:51:55.469907  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:55.468221  742569 kapi.go:59] client config for ingress-addon-legacy-861900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 21:51:55.471197  742569 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-861900"
	I1109 21:51:55.471245  742569 host.go:66] Checking if "ingress-addon-legacy-861900" exists ...
	I1109 21:51:55.471733  742569 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-861900 --format={{.State.Status}}
	I1109 21:51:55.508603  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:55.519846  742569 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 21:51:55.519869  742569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 21:51:55.519929  742569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-861900
	I1109 21:51:55.541586  742569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33690 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/ingress-addon-legacy-861900/id_rsa Username:docker}
	I1109 21:51:55.591319  742569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
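
The single-line pipeline just above is easier to follow reflowed; it injects a hosts{} block mapping host.minikube.internal to the gateway IP into the CoreDNS Corefile and enables query logging. This is the same command as logged, with only line breaks added:

    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -
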
	I1109 21:51:55.593095  742569 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-861900" context rescaled to 1 replicas
	I1109 21:51:55.593134  742569 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 21:51:55.595281  742569 out.go:177] * Verifying Kubernetes components...
	I1109 21:51:55.597815  742569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 21:51:55.727386  742569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 21:51:55.810877  742569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 21:51:56.161052  742569 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1109 21:51:56.161718  742569 kapi.go:59] client config for ingress-addon-legacy-861900: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 21:51:56.161977  742569 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-861900" to be "Ready" ...
	I1109 21:51:56.297125  742569 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1109 21:51:56.299632  742569 addons.go:502] enable addons completed in 896.808823ms: enabled=[storage-provisioner default-storageclass]
	I1109 21:51:58.182169  742569 node_ready.go:58] node "ingress-addon-legacy-861900" has status "Ready":"False"
	I1109 21:52:00.679679  742569 node_ready.go:58] node "ingress-addon-legacy-861900" has status "Ready":"False"
	I1109 21:52:02.680215  742569 node_ready.go:58] node "ingress-addon-legacy-861900" has status "Ready":"False"
	I1109 21:52:04.180104  742569 node_ready.go:49] node "ingress-addon-legacy-861900" has status "Ready":"True"
	I1109 21:52:04.180129  742569 node_ready.go:38] duration metric: took 8.018130775s waiting for node "ingress-addon-legacy-861900" to be "Ready" ...
	I1109 21:52:04.180140  742569 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:52:04.187621  742569 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:06.204386  742569 pod_ready.go:102] pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace has status "Ready":"False"
	I1109 21:52:08.206235  742569 pod_ready.go:102] pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace has status "Ready":"False"
	I1109 21:52:10.203876  742569 pod_ready.go:92] pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.203902  742569 pod_ready.go:81] duration metric: took 6.016247789s waiting for pod "coredns-66bff467f8-xvlpj" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.203914  742569 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.208348  742569 pod_ready.go:92] pod "etcd-ingress-addon-legacy-861900" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.208369  742569 pod_ready.go:81] duration metric: took 4.447883ms waiting for pod "etcd-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.208382  742569 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.212566  742569 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-861900" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.212589  742569 pod_ready.go:81] duration metric: took 4.199769ms waiting for pod "kube-apiserver-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.212600  742569 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.217160  742569 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-861900" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.217181  742569 pod_ready.go:81] duration metric: took 4.573873ms waiting for pod "kube-controller-manager-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.217192  742569 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzpwp" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.221490  742569 pod_ready.go:92] pod "kube-proxy-hzpwp" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.221513  742569 pod_ready.go:81] duration metric: took 4.314428ms waiting for pod "kube-proxy-hzpwp" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.221526  742569 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.398836  742569 request.go:629] Waited for 177.227916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-861900
	I1109 21:52:10.598773  742569 request.go:629] Waited for 197.305791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-861900
	I1109 21:52:10.601409  742569 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-861900" in "kube-system" namespace has status "Ready":"True"
	I1109 21:52:10.601433  742569 pod_ready.go:81] duration metric: took 379.899051ms waiting for pod "kube-scheduler-ingress-addon-legacy-861900" in "kube-system" namespace to be "Ready" ...
	I1109 21:52:10.601446  742569 pod_ready.go:38] duration metric: took 6.421289072s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 21:52:10.601460  742569 api_server.go:52] waiting for apiserver process to appear ...
	I1109 21:52:10.601552  742569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 21:52:10.614287  742569 api_server.go:72] duration metric: took 15.021115193s to wait for apiserver process to appear ...
	I1109 21:52:10.614320  742569 api_server.go:88] waiting for apiserver healthz status ...
	I1109 21:52:10.614338  742569 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 21:52:10.623265  742569 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 21:52:10.624104  742569 api_server.go:141] control plane version: v1.18.20
	I1109 21:52:10.624127  742569 api_server.go:131] duration metric: took 9.799984ms to wait for apiserver health ...
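
The healthz probe above can be reproduced by hand; a minimal sketch (use -k to skip TLS verification, or point --cacert at the minikube CA path that appears earlier in this log):

    # Expect HTTP 200 with body "ok", matching the healthz lines above.
    curl -sk https://192.168.49.2:8443/healthz
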
	I1109 21:52:10.624135  742569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 21:52:10.799484  742569 request.go:629] Waited for 175.290275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1109 21:52:10.805318  742569 system_pods.go:59] 8 kube-system pods found
	I1109 21:52:10.805357  742569 system_pods.go:61] "coredns-66bff467f8-xvlpj" [21a49005-d70f-4ed3-b4ee-c152858ec6bb] Running
	I1109 21:52:10.805364  742569 system_pods.go:61] "etcd-ingress-addon-legacy-861900" [0e493dc6-a6ba-470f-bb52-1de4dffd8513] Running
	I1109 21:52:10.805370  742569 system_pods.go:61] "kindnet-qmz79" [5c7f9d10-cffa-44a4-ab40-247ae020d804] Running
	I1109 21:52:10.805375  742569 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-861900" [f983107f-d0be-4cc0-aea8-9c14d4795bcd] Running
	I1109 21:52:10.805381  742569 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-861900" [eed21be3-48a7-4bce-8725-c99487aacb55] Running
	I1109 21:52:10.805385  742569 system_pods.go:61] "kube-proxy-hzpwp" [9ef89c7b-9e45-4303-a315-31aa5a71b12a] Running
	I1109 21:52:10.805390  742569 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-861900" [81600c7c-dac1-42cc-aa86-7d5dd3d7eb03] Running
	I1109 21:52:10.805395  742569 system_pods.go:61] "storage-provisioner" [d1a286b9-e693-4d7c-88d0-ab36ed6c87a8] Running
	I1109 21:52:10.805406  742569 system_pods.go:74] duration metric: took 181.265332ms to wait for pod list to return data ...
	I1109 21:52:10.805416  742569 default_sa.go:34] waiting for default service account to be created ...
	I1109 21:52:10.998774  742569 request.go:629] Waited for 193.286254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1109 21:52:11.001774  742569 default_sa.go:45] found service account: "default"
	I1109 21:52:11.001805  742569 default_sa.go:55] duration metric: took 196.381865ms for default service account to be created ...
	I1109 21:52:11.001816  742569 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 21:52:11.199217  742569 request.go:629] Waited for 197.331898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1109 21:52:11.205095  742569 system_pods.go:86] 8 kube-system pods found
	I1109 21:52:11.205125  742569 system_pods.go:89] "coredns-66bff467f8-xvlpj" [21a49005-d70f-4ed3-b4ee-c152858ec6bb] Running
	I1109 21:52:11.205135  742569 system_pods.go:89] "etcd-ingress-addon-legacy-861900" [0e493dc6-a6ba-470f-bb52-1de4dffd8513] Running
	I1109 21:52:11.205143  742569 system_pods.go:89] "kindnet-qmz79" [5c7f9d10-cffa-44a4-ab40-247ae020d804] Running
	I1109 21:52:11.205148  742569 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-861900" [f983107f-d0be-4cc0-aea8-9c14d4795bcd] Running
	I1109 21:52:11.205183  742569 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-861900" [eed21be3-48a7-4bce-8725-c99487aacb55] Running
	I1109 21:52:11.205196  742569 system_pods.go:89] "kube-proxy-hzpwp" [9ef89c7b-9e45-4303-a315-31aa5a71b12a] Running
	I1109 21:52:11.205201  742569 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-861900" [81600c7c-dac1-42cc-aa86-7d5dd3d7eb03] Running
	I1109 21:52:11.205206  742569 system_pods.go:89] "storage-provisioner" [d1a286b9-e693-4d7c-88d0-ab36ed6c87a8] Running
	I1109 21:52:11.205212  742569 system_pods.go:126] duration metric: took 203.390352ms to wait for k8s-apps to be running ...
	I1109 21:52:11.205224  742569 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 21:52:11.205293  742569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 21:52:11.219193  742569 system_svc.go:56] duration metric: took 13.960189ms WaitForService to wait for kubelet.
	I1109 21:52:11.219220  742569 kubeadm.go:581] duration metric: took 15.626055001s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 21:52:11.219240  742569 node_conditions.go:102] verifying NodePressure condition ...
	I1109 21:52:11.399644  742569 request.go:629] Waited for 180.306639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1109 21:52:11.403829  742569 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 21:52:11.403863  742569 node_conditions.go:123] node cpu capacity is 2
	I1109 21:52:11.403877  742569 node_conditions.go:105] duration metric: took 184.631504ms to run NodePressure ...
	I1109 21:52:11.403905  742569 start.go:228] waiting for startup goroutines ...
	I1109 21:52:11.403919  742569 start.go:233] waiting for cluster config update ...
	I1109 21:52:11.403942  742569 start.go:242] writing updated cluster config ...
	I1109 21:52:11.404224  742569 ssh_runner.go:195] Run: rm -f paused
	I1109 21:52:11.465367  742569 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1109 21:52:11.467894  742569 out.go:177] 
	W1109 21:52:11.470209  742569 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1109 21:52:11.472261  742569 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1109 21:52:11.474414  742569 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-861900" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 09 21:58:18 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:18.741790956Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=1f048d6e-6b9b-4719-87b1-fd2475d484ea name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:23 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:23.741857038Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=6603a1f6-8244-4f02-928b-655cec8cd826 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:23 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:23.742125821Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=6603a1f6-8244-4f02-928b-655cec8cd826 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:30 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:30.741572311Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=d1e052ab-1462-4776-8890-9e37852701b6 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:31 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:31.741702921Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=19bd7360-342b-4c20-9971-3d94ab536efa name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:31 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:31.742018087Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=19bd7360-342b-4c20-9971-3d94ab536efa name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:36 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:36.741587487Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=310c404d-0234-4d89-9816-826a0de61e7a name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:36 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:36.741863440Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=310c404d-0234-4d89-9816-826a0de61e7a name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:36 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:36.743401656Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=bf7ff116-3fb4-45bd-af47-674f2efb76bb name=/runtime.v1alpha2.ImageService/PullImage
	Nov 09 21:58:36 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:36.745625521Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:58:43 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:43.742090498Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=ad1649d0-4ba5-4fd4-8e43-b76bb076240d name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:45 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:45.741703183Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=b680de52-0155-4aa7-98ac-c81a255ecc4b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:45 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:45.741966787Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=b680de52-0155-4aa7-98ac-c81a255ecc4b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:56 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:56.741536708Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=c22b2942-80dc-489c-9254-d793ea6f317b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:58 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:58.741552185Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=ad3608da-1c86-401f-b009-306890bc6d7c name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:58:58 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:58:58.741827097Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=ad3608da-1c86-401f-b009-306890bc6d7c name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:09 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:09.741591695Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=ff68527a-7875-47b0-80d3-06907e750577 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:09 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:09.741884592Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=ff68527a-7875-47b0-80d3-06907e750577 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:10 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:10.741499254Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=62065f93-9a18-4bd1-a643-c86c1bc354c5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:21 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:21.741648311Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=1d279b26-63bb-474b-8adf-78f094332903 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:21 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:21.741911038Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=1d279b26-63bb-474b-8adf-78f094332903 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:24 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:24.741620209Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=e9cc6659-eed1-4902-9053-1012ecde2c06 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:36 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:36.741519474Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=b8f67cdd-b679-4e4b-97b0-b612b6a0038b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:36 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:36.741797217Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=b8f67cdd-b679-4e4b-97b0-b612b6a0038b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 09 21:59:38 ingress-addon-legacy-861900 crio[899]: time="2023-11-09 21:59:38.741600663Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=a2ba162b-7e17-43ea-aa3b-8969871cec87 name=/runtime.v1alpha2.ImageService/ImageStatus
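
The repeated ImageStatus misses in the CRI-O section above mean the runtime still has no local copy of the jettech/kube-webhook-certgen digest while the kubelet retries the pull. To check or fetch it manually on the node, crictl talks to the same runtime; a sketch, with the image reference copied from the log (the pull may still fail for whatever registry or network reason the test itself hit):

    # List what CRI-O has locally, then retry the pull the kubelet keeps attempting.
    sudo crictl images | grep kube-webhook-certgen
    sudo crictl pull docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7
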
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8fbecc9c3f547       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   7 minutes ago       Running             storage-provisioner       0                   37d1ea607b8b5       storage-provisioner
	2376cb1b3a6b6       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  7 minutes ago       Running             coredns                   0                   c08abe0554ec6       coredns-66bff467f8-xvlpj
	12c0413d19e2a       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                7 minutes ago       Running             kindnet-cni               0                   2c2e4cab23364       kindnet-qmz79
	6e4b6f3bb3bee       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  7 minutes ago       Running             kube-proxy                0                   2e40e19b9b394       kube-proxy-hzpwp
	4ff81395ca098       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  8 minutes ago       Running             kube-scheduler            0                   4b8298eaa7ed3       kube-scheduler-ingress-addon-legacy-861900
	89853e1bb576e       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  8 minutes ago       Running             etcd                      0                   f9b15b2de5254       etcd-ingress-addon-legacy-861900
	7e2e0409daae4       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  8 minutes ago       Running             kube-controller-manager   0                   61596e31e7a39       kube-controller-manager-ingress-addon-legacy-861900
	e7bf2710aeb7b       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  8 minutes ago       Running             kube-apiserver            0                   9c199d47751a8       kube-apiserver-ingress-addon-legacy-861900
	
	* 
	* ==> coredns [2376cb1b3a6b6813a5d2302411ed07beeb5f8e1f6497ff21408c390d11068428] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:48639 - 30310 "HINFO IN 41319439878355309.3327441200404581037. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.023292771s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-861900
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-861900
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b
	                    minikube.k8s.io/name=ingress-addon-legacy-861900
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_09T21_51_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Nov 2023 21:51:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-861900
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Nov 2023 21:59:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Nov 2023 21:57:13 +0000   Thu, 09 Nov 2023 21:51:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Nov 2023 21:57:13 +0000   Thu, 09 Nov 2023 21:51:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Nov 2023 21:57:13 +0000   Thu, 09 Nov 2023 21:51:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Nov 2023 21:57:13 +0000   Thu, 09 Nov 2023 21:52:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-861900
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 17ba4a3fdd1b457bacd86c8440d8632f
	  System UUID:                994f0811-8333-4938-90be-1fff4e2582ae
	  Boot ID:                    c6805f31-bd75-4a7d-9a37-90ff74c38794
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-ccr5n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  ingress-nginx               ingress-nginx-admission-patch-rgzmj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-dc48v              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m34s
	  kube-system                 coredns-66bff467f8-xvlpj                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m52s
	  kube-system                 etcd-ingress-addon-legacy-861900                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kindnet-qmz79                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m52s
	  kube-system                 kube-apiserver-ingress-addon-legacy-861900             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-861900    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-ingress-dns-minikube                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-hzpwp                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-scheduler-ingress-addon-legacy-861900             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m17s (x5 over 8m17s)  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m17s (x5 over 8m17s)  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m17s (x4 over 8m17s)  kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m3s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m3s                   kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m3s                   kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m3s                   kubelet     Node ingress-addon-legacy-861900 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m50s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m43s                  kubelet     Node ingress-addon-legacy-861900 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001047] FS-Cache: O-key=[8] '04613b0000000000'
	[  +0.000705] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000009519ed76
	[  +0.001234] FS-Cache: N-key=[8] '04613b0000000000'
	[  +1.883823] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000005eb91895
	[  +0.001121] FS-Cache: O-key=[8] '03613b0000000000'
	[  +0.000715] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=00000000afe277c2
	[  +0.001058] FS-Cache: N-key=[8] '03613b0000000000'
	[  +0.314346] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000000067384c
	[  +0.001081] FS-Cache: O-key=[8] '09613b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000004e0bd103
	[  +0.001050] FS-Cache: N-key=[8] '09613b0000000000'
	[  +3.214848] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=00000049 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=000000004b6c5454{9P.session} n=0000000040db7851
	[  +0.001155] FS-Cache: O-key=[10] '34323938393639353234'
	[  +0.000778] FS-Cache: N-cookie c=0000004a [p=00000002 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=000000004b6c5454{9P.session} n=00000000aa25bbf1
	[  +0.001089] FS-Cache: N-key=[10] '34323938393639353234'
	
	* 
	* ==> etcd [89853e1bb576e1a9e0b434efb8cb619e1e4814816a36c27eee433f8f804af1a9] <==
	* raft2023/11/09 21:51:31 INFO: aec36adc501070cc became follower at term 0
	raft2023/11/09 21:51:31 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/09 21:51:31 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/09 21:51:31 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-09 21:51:31.918658 W | auth: simple token is not cryptographically signed
	2023-11-09 21:51:32.002451 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-09 21:51:32.038429 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-09 21:51:32.054515 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-09 21:51:32.266436 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-09 21:51:32.294332 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-09 21:51:32.322334 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/09 21:51:32 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/09 21:51:32 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-09 21:51:32.908824 I | etcdserver: published {Name:ingress-addon-legacy-861900 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-09 21:51:32.908874 I | embed: ready to serve client requests
	2023-11-09 21:51:32.919967 I | embed: ready to serve client requests
	2023-11-09 21:51:33.026453 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-09 21:51:33.046429 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-09 21:51:33.062288 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-09 21:51:33.062384 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-09 21:51:33.067667 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  21:59:47 up  4:42,  0 users,  load average: 0.36, 0.40, 0.79
	Linux ingress-addon-legacy-861900 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [12c0413d19e2af170e00351d7872dbe4a650e36feb06b0bbe6b127a217ebae87] <==
	* I1109 21:57:38.415184       1 main.go:227] handling current node
	I1109 21:57:48.419130       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:57:48.419157       1 main.go:227] handling current node
	I1109 21:57:58.422362       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:57:58.422390       1 main.go:227] handling current node
	I1109 21:58:08.433216       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:58:08.433245       1 main.go:227] handling current node
	I1109 21:58:18.436874       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:58:18.436977       1 main.go:227] handling current node
	I1109 21:58:28.440688       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:58:28.440716       1 main.go:227] handling current node
	I1109 21:58:38.444617       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:58:38.444643       1 main.go:227] handling current node
	I1109 21:58:48.456464       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:58:48.456490       1 main.go:227] handling current node
	I1109 21:58:58.459694       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:58:58.459722       1 main.go:227] handling current node
	I1109 21:59:08.463170       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:59:08.463198       1 main.go:227] handling current node
	I1109 21:59:18.466264       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:59:18.466291       1 main.go:227] handling current node
	I1109 21:59:28.469706       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:59:28.469733       1 main.go:227] handling current node
	I1109 21:59:38.481386       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1109 21:59:38.481412       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [e7bf2710aeb7bc4b1cd8b33e83d715899c5277475057a2ba6df96976ef84be72] <==
	* E1109 21:51:37.108556       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1109 21:51:37.214411       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1109 21:51:37.214517       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	I1109 21:51:37.289846       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1109 21:51:37.293027       1 cache.go:39] Caches are synced for autoregister controller
	I1109 21:51:37.293343       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 21:51:37.317978       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1109 21:51:37.383193       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 21:51:38.082133       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1109 21:51:38.082167       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1109 21:51:38.089467       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1109 21:51:38.094420       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1109 21:51:38.094507       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1109 21:51:38.489536       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 21:51:38.526376       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1109 21:51:38.588184       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1109 21:51:38.589188       1 controller.go:609] quota admission added evaluator for: endpoints
	I1109 21:51:38.595275       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 21:51:39.515633       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1109 21:51:40.253705       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1109 21:51:40.349020       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1109 21:51:43.660838       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 21:51:54.933058       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1109 21:51:54.949415       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1109 21:52:12.330201       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [7e2e0409daae43d6039fc6b745df10ddcf31675c7ccec53ae59db703d6f88eec] <==
	* W1109 21:51:55.025406       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-861900. Assuming now as a timestamp.
	I1109 21:51:55.025446       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1109 21:51:55.025737       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1109 21:51:55.027611       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-861900", UID:"13083f17-da80-4417-be3a-db6cdc777fb1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-861900 event: Registered Node ingress-addon-legacy-861900 in Controller
	I1109 21:51:55.076310       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8870801c-b660-481c-9652-ca7ded0789e5", APIVersion:"apps/v1", ResourceVersion:"312", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-s6fzm
	E1109 21:51:55.099527       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"6929bc4c-7e8a-424a-86da-6fd51fdfbd76", ResourceVersion:"217", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63835163500, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001794a20), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001794a40)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001794a60), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001752f00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001794a80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001794aa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001794ae0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014f7040), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000fc3878), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000899c70), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000de6598)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000fc38c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I1109 21:51:55.324054       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1109 21:51:55.330288       1 shared_informer.go:230] Caches are synced for resource quota 
	I1109 21:51:55.487003       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1109 21:51:55.487022       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1109 21:51:55.487287       1 shared_informer.go:230] Caches are synced for attach detach 
	I1109 21:51:55.558253       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1109 21:51:55.566877       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"bcecd9c6-e2e7-4a60-957b-9e58f2a6b868", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1109 21:51:55.579454       1 shared_informer.go:230] Caches are synced for PV protection 
	I1109 21:51:55.579493       1 shared_informer.go:230] Caches are synced for expand 
	I1109 21:51:55.579553       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1109 21:51:55.606927       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"8870801c-b660-481c-9652-ca7ded0789e5", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-s6fzm
	I1109 21:51:56.378810       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1109 21:51:56.378848       1 shared_informer.go:230] Caches are synced for resource quota 
	I1109 21:52:05.025977       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1109 21:52:12.339176       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3ab493b9-13e2-4968-9d8d-fda76c205949", APIVersion:"apps/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1109 21:52:12.364238       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"967a26a5-b6bd-4d1b-9bfb-025c71119f27", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-ccr5n
	I1109 21:52:12.383670       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"0f32a6a5-086b-4551-a049-bfbe5bc5fd27", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-dc48v
	I1109 21:52:12.401494       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"d224efe5-ca39-4e29-aa76-38bfc3ee081b", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-rgzmj
	
	* 
	* ==> kube-proxy [6e4b6f3bb3bee815134504a4788b7def949611905937dfa311e8debaec65eba1] <==
	* W1109 21:51:56.246965       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1109 21:51:56.282103       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1109 21:51:56.282255       1 server_others.go:186] Using iptables Proxier.
	I1109 21:51:56.282742       1 server.go:583] Version: v1.18.20
	I1109 21:51:56.290244       1 config.go:133] Starting endpoints config controller
	I1109 21:51:56.290274       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1109 21:51:56.291757       1 config.go:315] Starting service config controller
	I1109 21:51:56.291782       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1109 21:51:56.390405       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1109 21:51:56.391918       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4ff81395ca0988ad3efbbe16de8845b0b6172216dc3f75ea574f05562d6683e9] <==
	* I1109 21:51:37.291027       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 21:51:37.291074       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 21:51:37.291117       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1109 21:51:37.315029       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1109 21:51:37.315376       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1109 21:51:37.315499       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1109 21:51:37.315613       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1109 21:51:37.315753       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1109 21:51:37.315883       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1109 21:51:37.315987       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1109 21:51:37.316071       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 21:51:37.316162       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 21:51:37.316251       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1109 21:51:37.322151       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1109 21:51:37.322340       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1109 21:51:38.128122       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1109 21:51:38.190921       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 21:51:38.241630       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 21:51:38.260860       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1109 21:51:38.274952       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1109 21:51:38.514003       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1109 21:51:41.191263       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1109 21:51:55.113768       1 factory.go:503] pod: kube-system/coredns-66bff467f8-xvlpj is already present in the active queue
	E1109 21:51:55.143936       1 factory.go:503] pod: kube-system/coredns-66bff467f8-s6fzm is already present in the active queue
	E1109 21:51:56.321820       1 factory.go:503] pod: kube-system/storage-provisioner is already present in unschedulable queue
	
	* 
	* ==> kubelet <==
	* Nov 09 21:58:47 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:47.741456    1606 pod_workers.go:191] Error syncing pod 6d344081-ebfd-49f4-a545-72ba675e86e7 ("ingress-nginx-controller-7fcf777cb7-dc48v_ingress-nginx(6d344081-ebfd-49f4-a545-72ba675e86e7)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-rkb49]: timed out waiting for the condition
	Nov 09 21:58:56 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:56.741867    1606 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:58:56 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:56.741913    1606 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:58:56 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:56.741955    1606 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:58:56 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:56.741986    1606 pod_workers.go:191] Error syncing pod a2348031-b285-41fe-ba11-852e16658474 ("kube-ingress-dns-minikube_kube-system(a2348031-b285-41fe-ba11-852e16658474)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 09 21:58:58 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:58:58.742039    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:59:09 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:09.742015    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:59:10 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:10.741825    1606 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:10 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:10.741869    1606 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:10 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:10.741935    1606 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:10 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:10.741965    1606 pod_workers.go:191] Error syncing pod a2348031-b285-41fe-ba11-852e16658474 ("kube-ingress-dns-minikube_kube-system(a2348031-b285-41fe-ba11-852e16658474)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 09 21:59:21 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:21.742047    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:59:24 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:24.741955    1606 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:24 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:24.741996    1606 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:24 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:24.742043    1606 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:24 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:24.742072    1606 pod_workers.go:191] Error syncing pod a2348031-b285-41fe-ba11-852e16658474 ("kube-ingress-dns-minikube_kube-system(a2348031-b285-41fe-ba11-852e16658474)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 09 21:59:36 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:36.742017    1606 pod_workers.go:191] Error syncing pod 0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef ("ingress-nginx-admission-create-ccr5n_ingress-nginx(0b866e20-eb9e-4677-a5b5-ab4b5b7cbaef)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 09 21:59:37 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:37.229634    1606 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:59:37 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:37.229701    1606 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:59:37 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:37.229767    1606 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 09 21:59:37 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:37.229803    1606 pod_workers.go:191] Error syncing pod 138b5bae-7db6-48b0-ba3c-c56c177dbb5f ("ingress-nginx-admission-patch-rgzmj_ingress-nginx(138b5bae-7db6-48b0-ba3c-c56c177dbb5f)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 09 21:59:38 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:38.742122    1606 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:38 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:38.742178    1606 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:38 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:38.742250    1606 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 09 21:59:38 ingress-addon-legacy-861900 kubelet[1606]: E1109 21:59:38.742282    1606 pod_workers.go:191] Error syncing pod a2348031-b285-41fe-ba11-852e16658474 ("kube-ingress-dns-minikube_kube-system(a2348031-b285-41fe-ba11-852e16658474)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	
	* 
	* ==> storage-provisioner [8fbecc9c3f5472a4700e41a971d8b829446928fdb54c4f4884443548babded41] <==
	* I1109 21:52:08.634347       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 21:52:08.649907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 21:52:08.651372       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 21:52:08.657253       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 21:52:08.657558       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-861900_6f8855dd-d2d9-4c4c-81fe-ee80884e23a6!
	I1109 21:52:08.658825       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fe3ef460-1f88-4dbd-9f61-e631a6d9e3ba", APIVersion:"v1", ResourceVersion:"434", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-861900_6f8855dd-d2d9-4c4c-81fe-ee80884e23a6 became leader
	I1109 21:52:08.758519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-861900_6f8855dd-d2d9-4c4c-81fe-ee80884e23a6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-861900 -n ingress-addon-legacy-861900
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-861900 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-ccr5n ingress-nginx-admission-patch-rgzmj ingress-nginx-controller-7fcf777cb7-dc48v kube-ingress-dns-minikube
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-861900 describe pod ingress-nginx-admission-create-ccr5n ingress-nginx-admission-patch-rgzmj ingress-nginx-controller-7fcf777cb7-dc48v kube-ingress-dns-minikube
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-861900 describe pod ingress-nginx-admission-create-ccr5n ingress-nginx-admission-patch-rgzmj ingress-nginx-controller-7fcf777cb7-dc48v kube-ingress-dns-minikube: exit status 1 (94.440151ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ccr5n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-rgzmj" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-dc48v" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-861900 describe pod ingress-nginx-admission-create-ccr5n ingress-nginx-admission-patch-rgzmj ingress-nginx-controller-7fcf777cb7-dc48v kube-ingress-dns-minikube: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (92.46s)
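The kubelet log above shows two independent image-pull failures behind this result: CRI-O rejected the short-name reference cryptexlabs/minikube-ingress-dns:0.3.0 because no unqualified-search registries are defined in /etc/containers/registries.conf, and the docker.io/jettech/kube-webhook-certgen pull hit the Docker Hub rate limit ("toomanyrequests"). A minimal shell sketch for confirming both on the node follows; it assumes SSH access to the profile named in the logs, and $DOCKERHUB_USER/$DOCKERHUB_TOKEN are illustrative placeholders, not values from this run:

	# Inspect the short-name policy CRI-O enforces on the node.
	minikube -p ingress-addon-legacy-861900 ssh -- cat /etc/containers/registries.conf

	# A fully-qualified reference sidesteps short-name resolution entirely.
	minikube -p ingress-addon-legacy-861900 ssh -- sudo crictl pull \
	    docker.io/cryptexlabs/minikube-ingress-dns:0.3.0

	# Authenticated pulls raise the Docker Hub limit behind "toomanyrequests".
	minikube -p ingress-addon-legacy-861900 ssh -- sudo crictl pull \
	    --creds "$DOCKERHUB_USER:$DOCKERHUB_TOKEN" \
	    docker.io/jettech/kube-webhook-certgen:v1.5.1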

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (4.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-76fbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-76fbj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-76fbj -- sh -c "ping -c 1 192.168.58.1": exit status 1 (251.825357ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-76fbj): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-zwn9f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-zwn9f -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-zwn9f -- sh -c "ping -c 1 192.168.58.1": exit status 1 (257.597072ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-zwn9f): exit status 1
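Both pods print the PING header and then fail with "permission denied (are you root?)": busybox's ping could open neither a raw ICMP socket (CRI-O's default capability set does not include CAP_NET_RAW) nor an unprivileged ICMP datagram socket, which the kernel permits only for groups inside net.ipv4.ping_group_range. A hedged sketch for checking and widening that range on the node; the range value is illustrative, not this job's configuration:

	# The default "1 0" is an empty range: no group may open ICMP datagram sockets.
	minikube -p multinode-833232 ssh -- sysctl net.ipv4.ping_group_range

	# Allow every group to ping without CAP_NET_RAW (illustrative range).
	minikube -p multinode-833232 ssh -- sudo sysctl -w 'net.ipv4.ping_group_range=0 2147483647'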
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-833232
helpers_test.go:235: (dbg) docker inspect multinode-833232:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6",
	        "Created": "2023-11-09T22:06:12.266656277Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 778346,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T22:06:12.584641686Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/hosts",
	        "LogPath": "/var/lib/docker/containers/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6-json.log",
	        "Name": "/multinode-833232",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-833232:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-833232",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f6ce6c61579a3cc07ce5d0d79511767dfc6b550fae60195b44195ca1a7a15f49-init/diff:/var/lib/docker/overlay2/7d8c4fc646533218e970cbbc2fae53265551a122428b3ce7f5ec8807d1cc9c68/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f6ce6c61579a3cc07ce5d0d79511767dfc6b550fae60195b44195ca1a7a15f49/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f6ce6c61579a3cc07ce5d0d79511767dfc6b550fae60195b44195ca1a7a15f49/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f6ce6c61579a3cc07ce5d0d79511767dfc6b550fae60195b44195ca1a7a15f49/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-833232",
	                "Source": "/var/lib/docker/volumes/multinode-833232/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-833232",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-833232",
	                "name.minikube.sigs.k8s.io": "multinode-833232",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f169380779406db880fac84896cc1de2c28eb9d04007bdafaebd83cf56c031ba",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33750"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33749"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33746"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33748"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33747"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f16938077940",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-833232": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "bc2ae93f7ba6",
	                        "multinode-833232"
	                    ],
	                    "NetworkID": "44f783ceb53c7cf4ff69e59f31b7715969ac107c24d5a673d310a23211c973a3",
	                    "EndpointID": "4264c6b6e26e6b98efb0a7f94528f1c907d271b946864786eae1bb0c704e9655",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
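The JSON above is `docker container inspect` output; its NetworkSettings.Ports map is what ties each container port to an ephemeral loopback port on the host. A minimal Go sketch of reading that mapping (the struct mirrors only the fields shown above, not the full inspect schema):

    // portmap.go - decode `docker container inspect` and print where each
    // container port is published on the host (e.g. 22/tcp -> 127.0.0.1:33750).
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "container", "inspect", "multinode-833232").Output()
    	if err != nil {
    		panic(err)
    	}
    	var containers []inspect
    	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
    		panic("unexpected inspect output")
    	}
    	for port, bindings := range containers[0].NetworkSettings.Ports {
    		for _, b := range bindings {
    			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
    		}
    	}
    }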
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-833232 -n multinode-833232
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-833232 logs -n 25: (1.646289887s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-976187                           | mount-start-2-976187 | jenkins | v1.32.0 | 09 Nov 23 22:05 UTC | 09 Nov 23 22:05 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-976187 ssh -- ls                    | mount-start-2-976187 | jenkins | v1.32.0 | 09 Nov 23 22:05 UTC | 09 Nov 23 22:05 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-974382                           | mount-start-1-974382 | jenkins | v1.32.0 | 09 Nov 23 22:05 UTC | 09 Nov 23 22:05 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-976187 ssh -- ls                    | mount-start-2-976187 | jenkins | v1.32.0 | 09 Nov 23 22:05 UTC | 09 Nov 23 22:05 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-976187                           | mount-start-2-976187 | jenkins | v1.32.0 | 09 Nov 23 22:05 UTC | 09 Nov 23 22:05 UTC |
	| start   | -p mount-start-2-976187                           | mount-start-2-976187 | jenkins | v1.32.0 | 09 Nov 23 22:05 UTC | 09 Nov 23 22:06 UTC |
	| ssh     | mount-start-2-976187 ssh -- ls                    | mount-start-2-976187 | jenkins | v1.32.0 | 09 Nov 23 22:06 UTC | 09 Nov 23 22:06 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-976187                           | mount-start-2-976187 | jenkins | v1.32.0 | 09 Nov 23 22:06 UTC | 09 Nov 23 22:06 UTC |
	| delete  | -p mount-start-1-974382                           | mount-start-1-974382 | jenkins | v1.32.0 | 09 Nov 23 22:06 UTC | 09 Nov 23 22:06 UTC |
	| start   | -p multinode-833232                               | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:06 UTC | 09 Nov 23 22:08 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- apply -f                   | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- rollout                    | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- get pods -o                | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- get pods -o                | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | busybox-5bc68d56bd-76fbj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | busybox-5bc68d56bd-zwn9f --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | busybox-5bc68d56bd-76fbj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | busybox-5bc68d56bd-zwn9f --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | busybox-5bc68d56bd-76fbj -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | busybox-5bc68d56bd-zwn9f -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- get pods -o                | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | busybox-5bc68d56bd-76fbj                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC |                     |
	|         | busybox-5bc68d56bd-76fbj -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC | 09 Nov 23 22:08 UTC |
	|         | busybox-5bc68d56bd-zwn9f                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-833232 -- exec                       | multinode-833232     | jenkins | v1.32.0 | 09 Nov 23 22:08 UTC |                     |
	|         | busybox-5bc68d56bd-zwn9f -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
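Note that the two `ping -c 1 192.168.58.1` rows above are the only entries with an empty End Time: those are the probes behind this PingHostFrom2Pods failure, pinging the docker network gateway from each busybox pod. A rough Go sketch of the same check (pod names and gateway are copied from the table; invoking plain kubectl with --context is an assumption, since the test goes through minikube's kubectl wrapper):

    // pingcheck.go - re-run the failing step: ping the host-side gateway
    // from inside each busybox pod.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const gateway = "192.168.58.1" // gateway of the multinode-833232 network below
    	for _, pod := range []string{"busybox-5bc68d56bd-76fbj", "busybox-5bc68d56bd-zwn9f"} {
    		out, err := exec.Command("kubectl", "--context", "multinode-833232",
    			"exec", pod, "--", "ping", "-c", "1", gateway).CombinedOutput()
    		fmt.Printf("%s: err=%v\n%s\n", pod, err, out)
    	}
    }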
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/09 22:06:06
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 22:06:06.782388  777892 out.go:296] Setting OutFile to fd 1 ...
	I1109 22:06:06.782586  777892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:06:06.782597  777892 out.go:309] Setting ErrFile to fd 2...
	I1109 22:06:06.782603  777892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:06:06.782838  777892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 22:06:06.783219  777892 out.go:303] Setting JSON to false
	I1109 22:06:06.784161  777892 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17317,"bootTime":1699550250,"procs":280,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 22:06:06.784229  777892 start.go:138] virtualization:  
	I1109 22:06:06.786609  777892 out.go:177] * [multinode-833232] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 22:06:06.789097  777892 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 22:06:06.789248  777892 notify.go:220] Checking for updates...
	I1109 22:06:06.793153  777892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 22:06:06.795434  777892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:06:06.797598  777892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 22:06:06.799528  777892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 22:06:06.801356  777892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 22:06:06.803454  777892 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 22:06:06.826806  777892 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 22:06:06.826911  777892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:06:06.906023  777892 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-09 22:06:06.895927699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:06:06.906161  777892 docker.go:295] overlay module found
	I1109 22:06:06.908107  777892 out.go:177] * Using the docker driver based on user configuration
	I1109 22:06:06.910074  777892 start.go:298] selected driver: docker
	I1109 22:06:06.910090  777892 start.go:902] validating driver "docker" against <nil>
	I1109 22:06:06.910104  777892 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 22:06:06.910767  777892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:06:06.975309  777892 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-09 22:06:06.966582288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:06:06.975478  777892 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1109 22:06:06.975699  777892 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 22:06:06.978277  777892 out.go:177] * Using Docker driver with root privileges
	I1109 22:06:06.980352  777892 cni.go:84] Creating CNI manager for ""
	I1109 22:06:06.980368  777892 cni.go:136] 0 nodes found, recommending kindnet
	I1109 22:06:06.980377  777892 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 22:06:06.980393  777892 start_flags.go:323] config:
	{Name:multinode-833232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-833232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 22:06:06.982500  777892 out.go:177] * Starting control plane node multinode-833232 in cluster multinode-833232
	I1109 22:06:06.984633  777892 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 22:06:06.986547  777892 out.go:177] * Pulling base image ...
	I1109 22:06:06.988534  777892 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 22:06:06.988584  777892 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1109 22:06:06.988598  777892 cache.go:56] Caching tarball of preloaded images
	I1109 22:06:06.988676  777892 preload.go:174] Found /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 22:06:06.988693  777892 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1109 22:06:06.989047  777892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/config.json ...
	I1109 22:06:06.989074  777892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/config.json: {Name:mk5f363ea9045cb6b24d1849716bccdcad79449d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:06.989227  777892 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1109 22:06:07.006385  777892 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1109 22:06:07.006411  777892 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1109 22:06:07.006430  777892 cache.go:194] Successfully downloaded all kic artifacts
	I1109 22:06:07.006478  777892 start.go:365] acquiring machines lock for multinode-833232: {Name:mkae629911c147c534aa0aacfab81d4211483993 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:06:07.006587  777892 start.go:369] acquired machines lock for "multinode-833232" in 87.933µs
	I1109 22:06:07.006619  777892 start.go:93] Provisioning new machine with config: &{Name:multinode-833232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-833232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 22:06:07.006703  777892 start.go:125] createHost starting for "" (driver="docker")
	I1109 22:06:07.010899  777892 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1109 22:06:07.011162  777892 start.go:159] libmachine.API.Create for "multinode-833232" (driver="docker")
	I1109 22:06:07.011216  777892 client.go:168] LocalClient.Create starting
	I1109 22:06:07.011284  777892 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem
	I1109 22:06:07.011322  777892 main.go:141] libmachine: Decoding PEM data...
	I1109 22:06:07.011343  777892 main.go:141] libmachine: Parsing certificate...
	I1109 22:06:07.011423  777892 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem
	I1109 22:06:07.011448  777892 main.go:141] libmachine: Decoding PEM data...
	I1109 22:06:07.011468  777892 main.go:141] libmachine: Parsing certificate...
	I1109 22:06:07.011816  777892 cli_runner.go:164] Run: docker network inspect multinode-833232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 22:06:07.028261  777892 cli_runner.go:211] docker network inspect multinode-833232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 22:06:07.028335  777892 network_create.go:281] running [docker network inspect multinode-833232] to gather additional debugging logs...
	I1109 22:06:07.028357  777892 cli_runner.go:164] Run: docker network inspect multinode-833232
	W1109 22:06:07.044688  777892 cli_runner.go:211] docker network inspect multinode-833232 returned with exit code 1
	I1109 22:06:07.044720  777892 network_create.go:284] error running [docker network inspect multinode-833232]: docker network inspect multinode-833232: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-833232 not found
	I1109 22:06:07.044732  777892 network_create.go:286] output of [docker network inspect multinode-833232]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-833232 not found
	
	** /stderr **
	I1109 22:06:07.044819  777892 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 22:06:07.061871  777892 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c8ab7f0d0118 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:72:9b:ff:43} reservation:<nil>}
	I1109 22:06:07.062205  777892 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002511410}
	I1109 22:06:07.062228  777892 network_create.go:124] attempt to create docker network multinode-833232 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1109 22:06:07.062294  777892 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-833232 multinode-833232
	I1109 22:06:07.129453  777892 network_create.go:108] docker network multinode-833232 192.168.58.0/24 created
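The two network.go lines above show the free-subnet scan: 192.168.49.0/24 is already backed by an existing bridge interface, so the next candidate, 192.168.58.0/24, is chosen. A rough sketch of such a scan (the step of 9 between candidates and the interface-based taken-check are inferred from this log, not lifted from network.go):

    // subnetscan.go - walk 192.168.x.0/24 candidates and return the first
    // subnet no local interface already lives in.
    package main

    import (
    	"fmt"
    	"net"
    )

    func taken(subnet *net.IPNet) bool {
    	ifaces, _ := net.Interfaces()
    	for _, iface := range ifaces {
    		addrs, _ := iface.Addrs()
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
    				return true // e.g. br-c8ab7f0d0118 holds 192.168.49.1
    			}
    		}
    	}
    	return false
    }

    func main() {
    	for third := 49; third < 256; third += 9 {
    		_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
    		if !taken(subnet) {
    			fmt.Println("using free private subnet", subnet) // 192.168.58.0/24 in this run
    			return
    		}
    	}
    }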
	I1109 22:06:07.129485  777892 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-833232" container
	I1109 22:06:07.129572  777892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 22:06:07.146027  777892 cli_runner.go:164] Run: docker volume create multinode-833232 --label name.minikube.sigs.k8s.io=multinode-833232 --label created_by.minikube.sigs.k8s.io=true
	I1109 22:06:07.168624  777892 oci.go:103] Successfully created a docker volume multinode-833232
	I1109 22:06:07.168723  777892 cli_runner.go:164] Run: docker run --rm --name multinode-833232-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-833232 --entrypoint /usr/bin/test -v multinode-833232:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1109 22:06:07.765872  777892 oci.go:107] Successfully prepared a docker volume multinode-833232
	I1109 22:06:07.765927  777892 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 22:06:07.765952  777892 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 22:06:07.766029  777892 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-833232:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 22:06:12.186893  777892 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-833232:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.420819181s)
	I1109 22:06:12.186933  777892 kic.go:203] duration metric: took 4.420979 seconds to extract preloaded images to volume
	W1109 22:06:12.187065  777892 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 22:06:12.187234  777892 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 22:06:12.251468  777892 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-833232 --name multinode-833232 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-833232 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-833232 --network multinode-833232 --ip 192.168.58.2 --volume multinode-833232:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
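The single-line `docker run` above creates the node container itself; broken out for readability (flag values copied verbatim from that line, minikube labels elided for brevity), it is roughly:

    // runnode.go - the container-create invocation above as an argument list.
    package main

    import "os/exec"

    func main() {
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24"
    	cmd := exec.Command("docker", "run", "-d", "-t", "--privileged",
    		"--security-opt", "seccomp=unconfined", "--security-opt", "apparmor=unconfined",
    		"--tmpfs", "/tmp", "--tmpfs", "/run",
    		"-v", "/lib/modules:/lib/modules:ro", // the read-only bind in the Mounts section above
    		"--volume", "multinode-833232:/var", // the named volume holding the preloaded images
    		"--hostname", "multinode-833232", "--name", "multinode-833232",
    		"--network", "multinode-833232", "--ip", "192.168.58.2",
    		"--memory=2200mb", "--cpus=2", "-e", "container=docker",
    		"--expose", "8443",
    		// each --publish=127.0.0.1:: maps to an ephemeral loopback port (33746-33750 in this run)
    		"--publish=127.0.0.1::8443", "--publish=127.0.0.1::22", "--publish=127.0.0.1::2376",
    		"--publish=127.0.0.1::5000", "--publish=127.0.0.1::32443",
    		image)
    	_ = cmd.Run()
    }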
	I1109 22:06:12.595137  777892 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Running}}
	I1109 22:06:12.616243  777892 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Status}}
	I1109 22:06:12.639343  777892 cli_runner.go:164] Run: docker exec multinode-833232 stat /var/lib/dpkg/alternatives/iptables
	I1109 22:06:12.714437  777892 oci.go:144] the created container "multinode-833232" has a running status.
	I1109 22:06:12.714465  777892 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa...
	I1109 22:06:13.135858  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1109 22:06:13.135938  777892 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 22:06:13.169770  777892 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Status}}
	I1109 22:06:13.202921  777892 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 22:06:13.202944  777892 kic_runner.go:114] Args: [docker exec --privileged multinode-833232 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 22:06:13.299001  777892 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Status}}
	I1109 22:06:13.330802  777892 machine.go:88] provisioning docker machine ...
	I1109 22:06:13.330839  777892 ubuntu.go:169] provisioning hostname "multinode-833232"
	I1109 22:06:13.330952  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:13.361413  777892 main.go:141] libmachine: Using SSH client type: native
	I1109 22:06:13.361867  777892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33750 <nil> <nil>}
	I1109 22:06:13.361888  777892 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-833232 && echo "multinode-833232" | sudo tee /etc/hostname
	I1109 22:06:13.573704  777892 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-833232
	
	I1109 22:06:13.573806  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:13.601930  777892 main.go:141] libmachine: Using SSH client type: native
	I1109 22:06:13.602350  777892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33750 <nil> <nil>}
	I1109 22:06:13.602397  777892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-833232' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-833232/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-833232' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 22:06:13.747724  777892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
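Both SSH commands above (writing /etc/hostname, then patching /etc/hosts) run over the loopback-published port 33750 with the generated machine key. A minimal sketch of that round trip with golang.org/x/crypto/ssh (port, key path, user, and command come from the log; InsecureIgnoreHostKey is a test-only shortcut, and this makes no claim to mirror libmachine's native client internals):

    // sshhostname.go - dial the loopback-published SSH port and set the hostname.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33750", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(`sudo hostname multinode-833232 && echo "multinode-833232" | sudo tee /etc/hostname`)
    	fmt.Printf("%s err=%v\n", out, err)
    }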
	I1109 22:06:13.747791  777892 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 22:06:13.747830  777892 ubuntu.go:177] setting up certificates
	I1109 22:06:13.747854  777892 provision.go:83] configureAuth start
	I1109 22:06:13.747938  777892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-833232
	I1109 22:06:13.771969  777892 provision.go:138] copyHostCerts
	I1109 22:06:13.772054  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 22:06:13.772088  777892 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 22:06:13.772095  777892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 22:06:13.772159  777892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 22:06:13.772224  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 22:06:13.772242  777892 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 22:06:13.772247  777892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 22:06:13.772271  777892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 22:06:13.772313  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 22:06:13.772330  777892 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 22:06:13.772334  777892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 22:06:13.772357  777892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 22:06:13.772400  777892 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.multinode-833232 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-833232]
	I1109 22:06:14.593746  777892 provision.go:172] copyRemoteCerts
	I1109 22:06:14.593818  777892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 22:06:14.593890  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:14.615251  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:06:14.717721  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 22:06:14.717780  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 22:06:14.746472  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 22:06:14.746531  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 22:06:14.775163  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 22:06:14.775230  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1109 22:06:14.804006  777892 provision.go:86] duration metric: configureAuth took 1.056128251s
	I1109 22:06:14.804035  777892 ubuntu.go:193] setting minikube options for container-runtime
	I1109 22:06:14.804234  777892 config.go:182] Loaded profile config "multinode-833232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 22:06:14.804347  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:14.823142  777892 main.go:141] libmachine: Using SSH client type: native
	I1109 22:06:14.823575  777892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33750 <nil> <nil>}
	I1109 22:06:14.823599  777892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 22:06:15.085842  777892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 22:06:15.085872  777892 machine.go:91] provisioned docker machine in 1.755050344s
	I1109 22:06:15.085890  777892 client.go:171] LocalClient.Create took 8.074663581s
	I1109 22:06:15.085906  777892 start.go:167] duration metric: libmachine.API.Create for "multinode-833232" took 8.074743286s
	I1109 22:06:15.085927  777892 start.go:300] post-start starting for "multinode-833232" (driver="docker")
	I1109 22:06:15.085939  777892 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 22:06:15.086011  777892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 22:06:15.086064  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:15.105604  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:06:15.209288  777892 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 22:06:15.213463  777892 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1109 22:06:15.213486  777892 command_runner.go:130] > NAME="Ubuntu"
	I1109 22:06:15.213494  777892 command_runner.go:130] > VERSION_ID="22.04"
	I1109 22:06:15.213501  777892 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1109 22:06:15.213508  777892 command_runner.go:130] > VERSION_CODENAME=jammy
	I1109 22:06:15.213534  777892 command_runner.go:130] > ID=ubuntu
	I1109 22:06:15.213546  777892 command_runner.go:130] > ID_LIKE=debian
	I1109 22:06:15.213552  777892 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1109 22:06:15.213562  777892 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1109 22:06:15.213569  777892 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1109 22:06:15.213578  777892 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1109 22:06:15.213586  777892 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1109 22:06:15.213640  777892 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 22:06:15.213672  777892 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 22:06:15.213686  777892 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 22:06:15.213695  777892 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1109 22:06:15.213708  777892 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 22:06:15.213770  777892 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 22:06:15.213861  777892 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 22:06:15.213874  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> /etc/ssl/certs/7135732.pem
	I1109 22:06:15.213979  777892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 22:06:15.224291  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 22:06:15.251806  777892 start.go:303] post-start completed in 165.863774ms
	I1109 22:06:15.252193  777892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-833232
	I1109 22:06:15.269147  777892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/config.json ...
	I1109 22:06:15.269414  777892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 22:06:15.269454  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:15.286450  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:06:15.384204  777892 command_runner.go:130] > 11%!(MISSING)
	I1109 22:06:15.384282  777892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 22:06:15.389839  777892 command_runner.go:130] > 174G
	I1109 22:06:15.389872  777892 start.go:128] duration metric: createHost completed in 8.383158544s
	I1109 22:06:15.389883  777892 start.go:83] releasing machines lock for "multinode-833232", held for 8.383282466s
	I1109 22:06:15.389954  777892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-833232
	I1109 22:06:15.406787  777892 ssh_runner.go:195] Run: cat /version.json
	I1109 22:06:15.406848  777892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 22:06:15.406895  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:15.406850  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:15.431316  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:06:15.432373  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:06:15.658707  777892 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1109 22:06:15.661758  777892 command_runner.go:130] > {"iso_version": "v1.32.1", "kicbase_version": "v0.0.42-1699485386-17565", "minikube_version": "v1.32.0", "commit": "ac8620e02dd92b447e2556d107d7751e3faf21d2"}
	I1109 22:06:15.661902  777892 ssh_runner.go:195] Run: systemctl --version
	I1109 22:06:15.667247  777892 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1109 22:06:15.667282  777892 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1109 22:06:15.667343  777892 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 22:06:15.812716  777892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 22:06:15.818458  777892 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1109 22:06:15.818530  777892 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1109 22:06:15.818553  777892 command_runner.go:130] > Device: 36h/54d	Inode: 1823289     Links: 1
	I1109 22:06:15.818594  777892 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1109 22:06:15.818607  777892 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1109 22:06:15.818614  777892 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1109 22:06:15.818621  777892 command_runner.go:130] > Change: 2023-11-09 21:28:21.090111595 +0000
	I1109 22:06:15.818627  777892 command_runner.go:130] >  Birth: 2023-11-09 21:28:21.090111595 +0000
	I1109 22:06:15.818941  777892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:06:15.844307  777892 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 22:06:15.844450  777892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:06:15.885442  777892 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1109 22:06:15.885528  777892 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1109 22:06:15.885559  777892 start.go:472] detecting cgroup driver to use...
	I1109 22:06:15.885608  777892 detect.go:196] detected "cgroupfs" cgroup driver on host os
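detect.go reports "cgroupfs" here, matching the CgroupDriver field in the docker info dumps earlier in this log. A sketch of one way to read that value (querying `docker info` is an assumed mechanism; the log only shows the result):

    // cgroupdriver.go - ask the docker daemon which cgroup driver it uses.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" on this host
    }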
	I1109 22:06:15.885679  777892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 22:06:15.905442  777892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 22:06:15.918843  777892 docker.go:203] disabling cri-docker service (if available) ...
	I1109 22:06:15.918919  777892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 22:06:15.934175  777892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 22:06:15.951058  777892 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 22:06:16.053501  777892 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 22:06:16.164724  777892 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1109 22:06:16.164769  777892 docker.go:219] disabling docker service ...
	I1109 22:06:16.164877  777892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 22:06:16.187154  777892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 22:06:16.200653  777892 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 22:06:16.292009  777892 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1109 22:06:16.292165  777892 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 22:06:16.397189  777892 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1109 22:06:16.397271  777892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 22:06:16.410466  777892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 22:06:16.431109  777892 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1109 22:06:16.431161  777892 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1109 22:06:16.431218  777892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:06:16.443332  777892 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 22:06:16.443427  777892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:06:16.454846  777892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:06:16.467007  777892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
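
After the three sed edits, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf should read roughly as below; this is a reconstruction from the commands above, not a capture of the file, checked here with a hypothetical grep:

    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    pause_image = "registry.k8s.io/pause:3.9"
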
	I1109 22:06:16.478651  777892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 22:06:16.489577  777892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 22:06:16.500138  777892 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1109 22:06:16.500249  777892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
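
The sysctl probe and the echo into /proc above enable the two bridged-traffic prerequisites for Kubernetes networking, but only for the running kernel. A persistent equivalent would be (illustrative; the drop-in path is an assumption, minikube itself does not write it):

    # persist the same settings across reboots (hypothetical drop-in file)
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system
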
	I1109 22:06:16.510530  777892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 22:06:16.604253  777892 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 22:06:16.733101  777892 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 22:06:16.733171  777892 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 22:06:16.737612  777892 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1109 22:06:16.737635  777892 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1109 22:06:16.737645  777892 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I1109 22:06:16.737653  777892 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1109 22:06:16.737660  777892 command_runner.go:130] > Access: 2023-11-09 22:06:16.715967645 +0000
	I1109 22:06:16.737667  777892 command_runner.go:130] > Modify: 2023-11-09 22:06:16.715967645 +0000
	I1109 22:06:16.737677  777892 command_runner.go:130] > Change: 2023-11-09 22:06:16.715967645 +0000
	I1109 22:06:16.737682  777892 command_runner.go:130] >  Birth: -
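
"Will wait 60s for socket path" boils down to polling for the unix socket with a deadline; the stat output above shows it appeared almost immediately after the restart. A shell sketch of that wait (illustrative, not minikube's actual Go code):

    # poll up to 60s for the CRI-O socket to appear
    for i in $(seq 1 60); do
      [ -S /var/run/crio/crio.sock ] && break
      sleep 1
    done
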
	I1109 22:06:16.738001  777892 start.go:540] Will wait 60s for crictl version
	I1109 22:06:16.738060  777892 ssh_runner.go:195] Run: which crictl
	I1109 22:06:16.742001  777892 command_runner.go:130] > /usr/bin/crictl
	I1109 22:06:16.742398  777892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 22:06:16.781199  777892 command_runner.go:130] > Version:  0.1.0
	I1109 22:06:16.781271  777892 command_runner.go:130] > RuntimeName:  cri-o
	I1109 22:06:16.781293  777892 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1109 22:06:16.781316  777892 command_runner.go:130] > RuntimeApiVersion:  v1
	I1109 22:06:16.783836  777892 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1109 22:06:16.786226  777892 ssh_runner.go:195] Run: crio --version
	I1109 22:06:16.826236  777892 command_runner.go:130] > crio version 1.24.6
	I1109 22:06:16.826307  777892 command_runner.go:130] > Version:          1.24.6
	I1109 22:06:16.826356  777892 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1109 22:06:16.826390  777892 command_runner.go:130] > GitTreeState:     clean
	I1109 22:06:16.826415  777892 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1109 22:06:16.826437  777892 command_runner.go:130] > GoVersion:        go1.18.2
	I1109 22:06:16.826470  777892 command_runner.go:130] > Compiler:         gc
	I1109 22:06:16.826495  777892 command_runner.go:130] > Platform:         linux/arm64
	I1109 22:06:16.826528  777892 command_runner.go:130] > Linkmode:         dynamic
	I1109 22:06:16.826569  777892 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1109 22:06:16.826593  777892 command_runner.go:130] > SeccompEnabled:   true
	I1109 22:06:16.826610  777892 command_runner.go:130] > AppArmorEnabled:  false
	I1109 22:06:16.828566  777892 ssh_runner.go:195] Run: crio --version
	I1109 22:06:16.873215  777892 command_runner.go:130] > crio version 1.24.6
	I1109 22:06:16.873237  777892 command_runner.go:130] > Version:          1.24.6
	I1109 22:06:16.873246  777892 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1109 22:06:16.873252  777892 command_runner.go:130] > GitTreeState:     clean
	I1109 22:06:16.873264  777892 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1109 22:06:16.873270  777892 command_runner.go:130] > GoVersion:        go1.18.2
	I1109 22:06:16.873277  777892 command_runner.go:130] > Compiler:         gc
	I1109 22:06:16.873282  777892 command_runner.go:130] > Platform:         linux/arm64
	I1109 22:06:16.873291  777892 command_runner.go:130] > Linkmode:         dynamic
	I1109 22:06:16.873303  777892 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1109 22:06:16.873311  777892 command_runner.go:130] > SeccompEnabled:   true
	I1109 22:06:16.873316  777892 command_runner.go:130] > AppArmorEnabled:  false
	I1109 22:06:16.877492  777892 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1109 22:06:16.879489  777892 cli_runner.go:164] Run: docker network inspect multinode-833232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
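
The --format argument above is a Go template that flattens the network's IPAM data and container IPs into a single JSON object. A trimmed version of the same template syntax, for reading by hand:

    docker network inspect multinode-833232 \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
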
	I1109 22:06:16.897821  777892 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1109 22:06:16.902799  777892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
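
The one-liner above is the usual idempotent delete-then-append edit of /etc/hosts: drop any existing host.minikube.internal entry, append it with the current gateway IP, and sudo-copy the temp file back (a plain shell redirect into /etc/hosts would fail without root). Generalized sketch, with the host name and IP taken from the log:

    HOST=host.minikube.internal IP=192.168.58.1
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
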
	I1109 22:06:16.917699  777892 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 22:06:16.917815  777892 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 22:06:16.982863  777892 command_runner.go:130] > {
	I1109 22:06:16.982883  777892 command_runner.go:130] >   "images": [
	I1109 22:06:16.982889  777892 command_runner.go:130] >     {
	I1109 22:06:16.982899  777892 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1109 22:06:16.982905  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.982913  777892 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1109 22:06:16.982918  777892 command_runner.go:130] >       ],
	I1109 22:06:16.982924  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.982939  777892 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1109 22:06:16.982953  777892 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1109 22:06:16.982963  777892 command_runner.go:130] >       ],
	I1109 22:06:16.982969  777892 command_runner.go:130] >       "size": "60867618",
	I1109 22:06:16.982977  777892 command_runner.go:130] >       "uid": null,
	I1109 22:06:16.982983  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.982992  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983000  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983007  777892 command_runner.go:130] >     },
	I1109 22:06:16.983011  777892 command_runner.go:130] >     {
	I1109 22:06:16.983019  777892 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1109 22:06:16.983028  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.983035  777892 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1109 22:06:16.983042  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983048  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.983057  777892 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1109 22:06:16.983067  777892 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1109 22:06:16.983076  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983082  777892 command_runner.go:130] >       "size": "29037500",
	I1109 22:06:16.983087  777892 command_runner.go:130] >       "uid": null,
	I1109 22:06:16.983095  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.983100  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983105  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983109  777892 command_runner.go:130] >     },
	I1109 22:06:16.983113  777892 command_runner.go:130] >     {
	I1109 22:06:16.983123  777892 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1109 22:06:16.983135  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.983142  777892 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1109 22:06:16.983146  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983151  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.983163  777892 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1109 22:06:16.983178  777892 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1109 22:06:16.983182  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983190  777892 command_runner.go:130] >       "size": "51393451",
	I1109 22:06:16.983194  777892 command_runner.go:130] >       "uid": null,
	I1109 22:06:16.983199  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.983207  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983212  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983218  777892 command_runner.go:130] >     },
	I1109 22:06:16.983223  777892 command_runner.go:130] >     {
	I1109 22:06:16.983235  777892 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1109 22:06:16.983243  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.983249  777892 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1109 22:06:16.983253  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983261  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.983272  777892 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1109 22:06:16.983282  777892 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1109 22:06:16.983293  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983299  777892 command_runner.go:130] >       "size": "182203183",
	I1109 22:06:16.983306  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:16.983311  777892 command_runner.go:130] >         "value": "0"
	I1109 22:06:16.983319  777892 command_runner.go:130] >       },
	I1109 22:06:16.983324  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.983329  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983343  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983347  777892 command_runner.go:130] >     },
	I1109 22:06:16.983352  777892 command_runner.go:130] >     {
	I1109 22:06:16.983362  777892 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1109 22:06:16.983367  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.983373  777892 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1109 22:06:16.983378  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983383  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.983402  777892 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1109 22:06:16.983421  777892 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1109 22:06:16.983426  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983433  777892 command_runner.go:130] >       "size": "121054158",
	I1109 22:06:16.983438  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:16.983443  777892 command_runner.go:130] >         "value": "0"
	I1109 22:06:16.983448  777892 command_runner.go:130] >       },
	I1109 22:06:16.983453  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.983460  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983465  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983469  777892 command_runner.go:130] >     },
	I1109 22:06:16.983476  777892 command_runner.go:130] >     {
	I1109 22:06:16.983484  777892 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1109 22:06:16.983492  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.983499  777892 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1109 22:06:16.983504  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983512  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.983522  777892 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1109 22:06:16.983536  777892 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1109 22:06:16.983541  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983547  777892 command_runner.go:130] >       "size": "117252916",
	I1109 22:06:16.983552  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:16.983559  777892 command_runner.go:130] >         "value": "0"
	I1109 22:06:16.983565  777892 command_runner.go:130] >       },
	I1109 22:06:16.983571  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.983578  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983583  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983587  777892 command_runner.go:130] >     },
	I1109 22:06:16.983593  777892 command_runner.go:130] >     {
	I1109 22:06:16.983601  777892 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1109 22:06:16.983609  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.983615  777892 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1109 22:06:16.983620  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983625  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.983635  777892 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1109 22:06:16.983646  777892 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1109 22:06:16.983652  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983660  777892 command_runner.go:130] >       "size": "69926807",
	I1109 22:06:16.983665  777892 command_runner.go:130] >       "uid": null,
	I1109 22:06:16.983670  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.983678  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983683  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983687  777892 command_runner.go:130] >     },
	I1109 22:06:16.983693  777892 command_runner.go:130] >     {
	I1109 22:06:16.983701  777892 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1109 22:06:16.983706  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.983713  777892 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1109 22:06:16.983719  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983725  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.983750  777892 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1109 22:06:16.983763  777892 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1109 22:06:16.983768  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983778  777892 command_runner.go:130] >       "size": "59188020",
	I1109 22:06:16.983783  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:16.983790  777892 command_runner.go:130] >         "value": "0"
	I1109 22:06:16.983797  777892 command_runner.go:130] >       },
	I1109 22:06:16.983805  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.983810  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983818  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983822  777892 command_runner.go:130] >     },
	I1109 22:06:16.983826  777892 command_runner.go:130] >     {
	I1109 22:06:16.983834  777892 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1109 22:06:16.983842  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:16.983848  777892 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1109 22:06:16.983853  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983860  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:16.983872  777892 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1109 22:06:16.983881  777892 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1109 22:06:16.983888  777892 command_runner.go:130] >       ],
	I1109 22:06:16.983894  777892 command_runner.go:130] >       "size": "520014",
	I1109 22:06:16.983898  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:16.983903  777892 command_runner.go:130] >         "value": "65535"
	I1109 22:06:16.983912  777892 command_runner.go:130] >       },
	I1109 22:06:16.983917  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:16.983926  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:16.983930  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:16.983935  777892 command_runner.go:130] >     }
	I1109 22:06:16.983939  777892 command_runner.go:130] >   ]
	I1109 22:06:16.983944  777892 command_runner.go:130] > }
	I1109 22:06:16.984131  777892 crio.go:496] all images are preloaded for cri-o runtime.
	I1109 22:06:16.984144  777892 crio.go:415] Images already preloaded, skipping extraction
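
To eyeball the same inventory by hand, the JSON above pipes cleanly into jq (illustrative; jq is not part of the minikube flow):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # -> docker.io/kindest/kindnetd:v20230809-80a64d96, registry.k8s.io/etcd:3.5.9-0, ...
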
	I1109 22:06:16.984197  777892 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 22:06:17.021959  777892 command_runner.go:130] > {
	I1109 22:06:17.021982  777892 command_runner.go:130] >   "images": [
	I1109 22:06:17.021988  777892 command_runner.go:130] >     {
	I1109 22:06:17.021998  777892 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1109 22:06:17.022004  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.022011  777892 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1109 22:06:17.022020  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022028  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.022039  777892 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1109 22:06:17.022048  777892 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1109 22:06:17.022057  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022063  777892 command_runner.go:130] >       "size": "60867618",
	I1109 22:06:17.022071  777892 command_runner.go:130] >       "uid": null,
	I1109 22:06:17.022077  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.022088  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.022097  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.022102  777892 command_runner.go:130] >     },
	I1109 22:06:17.022107  777892 command_runner.go:130] >     {
	I1109 22:06:17.022117  777892 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1109 22:06:17.022122  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.022132  777892 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1109 22:06:17.022140  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022145  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.022155  777892 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1109 22:06:17.022166  777892 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1109 22:06:17.022171  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022180  777892 command_runner.go:130] >       "size": "29037500",
	I1109 22:06:17.022190  777892 command_runner.go:130] >       "uid": null,
	I1109 22:06:17.022194  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.022199  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.022204  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.022211  777892 command_runner.go:130] >     },
	I1109 22:06:17.022216  777892 command_runner.go:130] >     {
	I1109 22:06:17.022227  777892 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1109 22:06:17.022236  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.022243  777892 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1109 22:06:17.022251  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022256  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.022269  777892 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1109 22:06:17.022279  777892 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1109 22:06:17.022288  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022294  777892 command_runner.go:130] >       "size": "51393451",
	I1109 22:06:17.022305  777892 command_runner.go:130] >       "uid": null,
	I1109 22:06:17.022343  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.022353  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.022359  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.022367  777892 command_runner.go:130] >     },
	I1109 22:06:17.022372  777892 command_runner.go:130] >     {
	I1109 22:06:17.022380  777892 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1109 22:06:17.022388  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.022398  777892 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1109 22:06:17.022407  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022412  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.022424  777892 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1109 22:06:17.022436  777892 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1109 22:06:17.022453  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022462  777892 command_runner.go:130] >       "size": "182203183",
	I1109 22:06:17.022467  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:17.022472  777892 command_runner.go:130] >         "value": "0"
	I1109 22:06:17.022477  777892 command_runner.go:130] >       },
	I1109 22:06:17.022485  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.022494  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.022501  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.022509  777892 command_runner.go:130] >     },
	I1109 22:06:17.022514  777892 command_runner.go:130] >     {
	I1109 22:06:17.022526  777892 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1109 22:06:17.022534  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.022540  777892 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1109 22:06:17.022545  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022553  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.022562  777892 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1109 22:06:17.022575  777892 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1109 22:06:17.022583  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022589  777892 command_runner.go:130] >       "size": "121054158",
	I1109 22:06:17.022597  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:17.022602  777892 command_runner.go:130] >         "value": "0"
	I1109 22:06:17.022610  777892 command_runner.go:130] >       },
	I1109 22:06:17.022615  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.022626  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.022631  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.022636  777892 command_runner.go:130] >     },
	I1109 22:06:17.022640  777892 command_runner.go:130] >     {
	I1109 22:06:17.022650  777892 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1109 22:06:17.022658  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.022665  777892 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1109 22:06:17.022673  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022678  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.022691  777892 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1109 22:06:17.022704  777892 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1109 22:06:17.022711  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022717  777892 command_runner.go:130] >       "size": "117252916",
	I1109 22:06:17.022721  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:17.022728  777892 command_runner.go:130] >         "value": "0"
	I1109 22:06:17.022736  777892 command_runner.go:130] >       },
	I1109 22:06:17.022742  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.022750  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.022757  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.022765  777892 command_runner.go:130] >     },
	I1109 22:06:17.022770  777892 command_runner.go:130] >     {
	I1109 22:06:17.022781  777892 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1109 22:06:17.022790  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.022796  777892 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1109 22:06:17.022801  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022806  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.022817  777892 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1109 22:06:17.022831  777892 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1109 22:06:17.022839  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022845  777892 command_runner.go:130] >       "size": "69926807",
	I1109 22:06:17.022853  777892 command_runner.go:130] >       "uid": null,
	I1109 22:06:17.022858  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.022866  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.022872  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.022878  777892 command_runner.go:130] >     },
	I1109 22:06:17.022882  777892 command_runner.go:130] >     {
	I1109 22:06:17.022892  777892 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1109 22:06:17.022901  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.022908  777892 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1109 22:06:17.022915  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022921  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.022964  777892 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1109 22:06:17.022980  777892 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1109 22:06:17.022985  777892 command_runner.go:130] >       ],
	I1109 22:06:17.022992  777892 command_runner.go:130] >       "size": "59188020",
	I1109 22:06:17.023001  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:17.023006  777892 command_runner.go:130] >         "value": "0"
	I1109 22:06:17.023014  777892 command_runner.go:130] >       },
	I1109 22:06:17.023019  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.023028  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.023033  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.023040  777892 command_runner.go:130] >     },
	I1109 22:06:17.023044  777892 command_runner.go:130] >     {
	I1109 22:06:17.023052  777892 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1109 22:06:17.023062  777892 command_runner.go:130] >       "repoTags": [
	I1109 22:06:17.023068  777892 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1109 22:06:17.023076  777892 command_runner.go:130] >       ],
	I1109 22:06:17.023081  777892 command_runner.go:130] >       "repoDigests": [
	I1109 22:06:17.023093  777892 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1109 22:06:17.023106  777892 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1109 22:06:17.023114  777892 command_runner.go:130] >       ],
	I1109 22:06:17.023119  777892 command_runner.go:130] >       "size": "520014",
	I1109 22:06:17.023123  777892 command_runner.go:130] >       "uid": {
	I1109 22:06:17.023129  777892 command_runner.go:130] >         "value": "65535"
	I1109 22:06:17.023136  777892 command_runner.go:130] >       },
	I1109 22:06:17.023141  777892 command_runner.go:130] >       "username": "",
	I1109 22:06:17.023149  777892 command_runner.go:130] >       "spec": null,
	I1109 22:06:17.023155  777892 command_runner.go:130] >       "pinned": false
	I1109 22:06:17.023162  777892 command_runner.go:130] >     }
	I1109 22:06:17.023167  777892 command_runner.go:130] >   ]
	I1109 22:06:17.023174  777892 command_runner.go:130] > }
	I1109 22:06:17.025660  777892 crio.go:496] all images are preloaded for cri-o runtime.
	I1109 22:06:17.025682  777892 cache_images.go:84] Images are preloaded, skipping loading
	I1109 22:06:17.025765  777892 ssh_runner.go:195] Run: crio config
	I1109 22:06:17.074366  777892 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1109 22:06:17.074391  777892 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1109 22:06:17.074399  777892 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1109 22:06:17.074403  777892 command_runner.go:130] > #
	I1109 22:06:17.074424  777892 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1109 22:06:17.074433  777892 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1109 22:06:17.074443  777892 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1109 22:06:17.074456  777892 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1109 22:06:17.074461  777892 command_runner.go:130] > # reload'.
	I1109 22:06:17.074468  777892 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1109 22:06:17.074476  777892 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1109 22:06:17.074483  777892 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1109 22:06:17.074490  777892 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1109 22:06:17.074495  777892 command_runner.go:130] > [crio]
	I1109 22:06:17.074502  777892 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1109 22:06:17.074508  777892 command_runner.go:130] > # containers images, in this directory.
	I1109 22:06:17.074517  777892 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1109 22:06:17.074525  777892 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1109 22:06:17.074811  777892 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1109 22:06:17.074833  777892 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1109 22:06:17.074841  777892 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1109 22:06:17.074847  777892 command_runner.go:130] > # storage_driver = "vfs"
	I1109 22:06:17.074854  777892 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1109 22:06:17.074861  777892 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1109 22:06:17.074866  777892 command_runner.go:130] > # storage_option = [
	I1109 22:06:17.075170  777892 command_runner.go:130] > # ]
	I1109 22:06:17.075183  777892 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1109 22:06:17.075194  777892 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1109 22:06:17.075199  777892 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1109 22:06:17.075207  777892 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1109 22:06:17.075214  777892 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1109 22:06:17.075257  777892 command_runner.go:130] > # always happen on a node reboot
	I1109 22:06:17.075268  777892 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1109 22:06:17.075275  777892 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1109 22:06:17.075282  777892 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1109 22:06:17.075322  777892 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1109 22:06:17.075333  777892 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1109 22:06:17.075342  777892 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1109 22:06:17.075352  777892 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1109 22:06:17.075357  777892 command_runner.go:130] > # internal_wipe = true
	I1109 22:06:17.075364  777892 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1109 22:06:17.075371  777892 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1109 22:06:17.075403  777892 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1109 22:06:17.075413  777892 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1109 22:06:17.075422  777892 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1109 22:06:17.075427  777892 command_runner.go:130] > [crio.api]
	I1109 22:06:17.075434  777892 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1109 22:06:17.075439  777892 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1109 22:06:17.075446  777892 command_runner.go:130] > # IP address on which the stream server will listen.
	I1109 22:06:17.075486  777892 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1109 22:06:17.075499  777892 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1109 22:06:17.075505  777892 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1109 22:06:17.075509  777892 command_runner.go:130] > # stream_port = "0"
	I1109 22:06:17.075516  777892 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1109 22:06:17.075521  777892 command_runner.go:130] > # stream_enable_tls = false
	I1109 22:06:17.075553  777892 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1109 22:06:17.075563  777892 command_runner.go:130] > # stream_idle_timeout = ""
	I1109 22:06:17.075571  777892 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1109 22:06:17.075580  777892 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1109 22:06:17.075585  777892 command_runner.go:130] > # minutes.
	I1109 22:06:17.075590  777892 command_runner.go:130] > # stream_tls_cert = ""
	I1109 22:06:17.075597  777892 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1109 22:06:17.075604  777892 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1109 22:06:17.075637  777892 command_runner.go:130] > # stream_tls_key = ""
	I1109 22:06:17.075645  777892 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1109 22:06:17.075652  777892 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1109 22:06:17.075659  777892 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1109 22:06:17.075666  777892 command_runner.go:130] > # stream_tls_ca = ""
	I1109 22:06:17.075675  777892 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1109 22:06:17.075680  777892 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1109 22:06:17.075714  777892 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1109 22:06:17.075724  777892 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1109 22:06:17.075794  777892 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1109 22:06:17.075804  777892 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1109 22:06:17.075810  777892 command_runner.go:130] > [crio.runtime]
	I1109 22:06:17.075817  777892 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1109 22:06:17.075823  777892 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1109 22:06:17.075828  777892 command_runner.go:130] > # "nofile=1024:2048"
	I1109 22:06:17.075836  777892 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1109 22:06:17.075865  777892 command_runner.go:130] > # default_ulimits = [
	I1109 22:06:17.075873  777892 command_runner.go:130] > # ]
	I1109 22:06:17.075880  777892 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1109 22:06:17.075887  777892 command_runner.go:130] > # no_pivot = false
	I1109 22:06:17.075895  777892 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1109 22:06:17.075902  777892 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1109 22:06:17.075911  777892 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1109 22:06:17.075944  777892 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1109 22:06:17.075954  777892 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1109 22:06:17.075980  777892 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1109 22:06:17.075984  777892 command_runner.go:130] > # conmon = ""
	I1109 22:06:17.075989  777892 command_runner.go:130] > # Cgroup setting for conmon
	I1109 22:06:17.076022  777892 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1109 22:06:17.076030  777892 command_runner.go:130] > conmon_cgroup = "pod"
	I1109 22:06:17.076037  777892 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1109 22:06:17.076044  777892 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1109 22:06:17.076052  777892 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1109 22:06:17.076056  777892 command_runner.go:130] > # conmon_env = [
	I1109 22:06:17.076060  777892 command_runner.go:130] > # ]
	I1109 22:06:17.076067  777892 command_runner.go:130] > # Additional environment variables to set for all the
	I1109 22:06:17.076098  777892 command_runner.go:130] > # containers. These are overridden if set in the
	I1109 22:06:17.076108  777892 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1109 22:06:17.076113  777892 command_runner.go:130] > # default_env = [
	I1109 22:06:17.076117  777892 command_runner.go:130] > # ]
	I1109 22:06:17.076127  777892 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1109 22:06:17.076131  777892 command_runner.go:130] > # selinux = false
	I1109 22:06:17.076139  777892 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1109 22:06:17.076147  777892 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1109 22:06:17.076179  777892 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1109 22:06:17.076187  777892 command_runner.go:130] > # seccomp_profile = ""
	I1109 22:06:17.076194  777892 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1109 22:06:17.076202  777892 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1109 22:06:17.076209  777892 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1109 22:06:17.076215  777892 command_runner.go:130] > # which might increase security.
	I1109 22:06:17.076220  777892 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1109 22:06:17.076253  777892 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1109 22:06:17.076264  777892 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1109 22:06:17.076272  777892 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1109 22:06:17.076279  777892 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1109 22:06:17.076285  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:06:17.076291  777892 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1109 22:06:17.076300  777892 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1109 22:06:17.076335  777892 command_runner.go:130] > # the cgroup blockio controller.
	I1109 22:06:17.076344  777892 command_runner.go:130] > # blockio_config_file = ""
	I1109 22:06:17.076352  777892 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1109 22:06:17.076357  777892 command_runner.go:130] > # irqbalance daemon.
	I1109 22:06:17.076363  777892 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1109 22:06:17.076371  777892 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1109 22:06:17.076377  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:06:17.076410  777892 command_runner.go:130] > # rdt_config_file = ""
	I1109 22:06:17.076420  777892 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1109 22:06:17.076426  777892 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1109 22:06:17.076433  777892 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1109 22:06:17.076439  777892 command_runner.go:130] > # separate_pull_cgroup = ""
	I1109 22:06:17.076446  777892 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1109 22:06:17.076454  777892 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1109 22:06:17.076491  777892 command_runner.go:130] > # will be added.
	I1109 22:06:17.076501  777892 command_runner.go:130] > # default_capabilities = [
	I1109 22:06:17.076505  777892 command_runner.go:130] > # 	"CHOWN",
	I1109 22:06:17.076510  777892 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1109 22:06:17.076517  777892 command_runner.go:130] > # 	"FSETID",
	I1109 22:06:17.076522  777892 command_runner.go:130] > # 	"FOWNER",
	I1109 22:06:17.076526  777892 command_runner.go:130] > # 	"SETGID",
	I1109 22:06:17.076531  777892 command_runner.go:130] > # 	"SETUID",
	I1109 22:06:17.076561  777892 command_runner.go:130] > # 	"SETPCAP",
	I1109 22:06:17.076570  777892 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1109 22:06:17.076575  777892 command_runner.go:130] > # 	"KILL",
	I1109 22:06:17.076579  777892 command_runner.go:130] > # ]
	I1109 22:06:17.076588  777892 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1109 22:06:17.076596  777892 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1109 22:06:17.076602  777892 command_runner.go:130] > # add_inheritable_capabilities = true
	I1109 22:06:17.076609  777892 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1109 22:06:17.076640  777892 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1109 22:06:17.076647  777892 command_runner.go:130] > # default_sysctls = [
	I1109 22:06:17.076652  777892 command_runner.go:130] > # ]
	I1109 22:06:17.076660  777892 command_runner.go:130] > # List of devices on the host that a
	I1109 22:06:17.076667  777892 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1109 22:06:17.076672  777892 command_runner.go:130] > # allowed_devices = [
	I1109 22:06:17.076680  777892 command_runner.go:130] > # 	"/dev/fuse",
	I1109 22:06:17.076685  777892 command_runner.go:130] > # ]
	I1109 22:06:17.076717  777892 command_runner.go:130] > # List of additional devices, specified as
	I1109 22:06:17.076799  777892 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1109 22:06:17.076826  777892 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1109 22:06:17.076886  777892 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1109 22:06:17.076973  777892 command_runner.go:130] > # additional_devices = [
	I1109 22:06:17.076999  777892 command_runner.go:130] > # ]
	I1109 22:06:17.077020  777892 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1109 22:06:17.077053  777892 command_runner.go:130] > # cdi_spec_dirs = [
	I1109 22:06:17.077077  777892 command_runner.go:130] > # 	"/etc/cdi",
	I1109 22:06:17.077097  777892 command_runner.go:130] > # 	"/var/run/cdi",
	I1109 22:06:17.077130  777892 command_runner.go:130] > # ]
	I1109 22:06:17.077155  777892 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1109 22:06:17.077175  777892 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1109 22:06:17.077208  777892 command_runner.go:130] > # Defaults to false.
	I1109 22:06:17.077233  777892 command_runner.go:130] > # device_ownership_from_security_context = false
	I1109 22:06:17.077254  777892 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1109 22:06:17.077291  777892 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1109 22:06:17.077312  777892 command_runner.go:130] > # hooks_dir = [
	I1109 22:06:17.077330  777892 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1109 22:06:17.077362  777892 command_runner.go:130] > # ]
	I1109 22:06:17.077387  777892 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1109 22:06:17.077407  777892 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1109 22:06:17.077441  777892 command_runner.go:130] > # its default mounts from the following two files:
	I1109 22:06:17.077464  777892 command_runner.go:130] > #
	I1109 22:06:17.077487  777892 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1109 22:06:17.077522  777892 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1109 22:06:17.077546  777892 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1109 22:06:17.077564  777892 command_runner.go:130] > #
	I1109 22:06:17.077599  777892 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1109 22:06:17.077625  777892 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1109 22:06:17.077644  777892 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1109 22:06:17.077679  777892 command_runner.go:130] > #      only add mounts it finds in this file.
	I1109 22:06:17.077703  777892 command_runner.go:130] > #
	I1109 22:06:17.077723  777892 command_runner.go:130] > # default_mounts_file = ""
	I1109 22:06:17.077771  777892 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1109 22:06:17.077805  777892 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1109 22:06:17.077847  777892 command_runner.go:130] > # pids_limit = 0
	I1109 22:06:17.077878  777892 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1109 22:06:17.077900  777892 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1109 22:06:17.077931  777892 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1109 22:06:17.077958  777892 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1109 22:06:17.077980  777892 command_runner.go:130] > # log_size_max = -1
	I1109 22:06:17.078013  777892 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1109 22:06:17.078491  777892 command_runner.go:130] > # log_to_journald = false
	I1109 22:06:17.078509  777892 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1109 22:06:17.078516  777892 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1109 22:06:17.078523  777892 command_runner.go:130] > # Path to directory for container attach sockets.
	I1109 22:06:17.078534  777892 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1109 22:06:17.078542  777892 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1109 22:06:17.078547  777892 command_runner.go:130] > # bind_mount_prefix = ""
	I1109 22:06:17.078554  777892 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1109 22:06:17.078559  777892 command_runner.go:130] > # read_only = false
	I1109 22:06:17.078568  777892 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1109 22:06:17.078579  777892 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1109 22:06:17.078589  777892 command_runner.go:130] > # live configuration reload.
	I1109 22:06:17.078594  777892 command_runner.go:130] > # log_level = "info"
	I1109 22:06:17.078601  777892 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1109 22:06:17.078607  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:06:17.078612  777892 command_runner.go:130] > # log_filter = ""
	I1109 22:06:17.078619  777892 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1109 22:06:17.078632  777892 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1109 22:06:17.078637  777892 command_runner.go:130] > # separated by comma.
	I1109 22:06:17.078646  777892 command_runner.go:130] > # uid_mappings = ""
	I1109 22:06:17.078654  777892 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1109 22:06:17.078667  777892 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1109 22:06:17.078672  777892 command_runner.go:130] > # separated by comma.
	I1109 22:06:17.078681  777892 command_runner.go:130] > # gid_mappings = ""
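
A hedged sketch of such mappings; the host ID range is an assumption for illustration, not taken from this run:

	# Map container ID 0 onto host ID 100000 for 65536 consecutive IDs; multiple
	# ranges would be comma-separated, e.g. "0:100000:65536,65536:200000:1000".
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
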
	I1109 22:06:17.078689  777892 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1109 22:06:17.078696  777892 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1109 22:06:17.078706  777892 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1109 22:06:17.078714  777892 command_runner.go:130] > # minimum_mappable_uid = -1
	I1109 22:06:17.078726  777892 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1109 22:06:17.078734  777892 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1109 22:06:17.078745  777892 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1109 22:06:17.078751  777892 command_runner.go:130] > # minimum_mappable_gid = -1
	I1109 22:06:17.078762  777892 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1109 22:06:17.078770  777892 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1109 22:06:17.078777  777892 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1109 22:06:17.078782  777892 command_runner.go:130] > # ctr_stop_timeout = 30
	I1109 22:06:17.078789  777892 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1109 22:06:17.078797  777892 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1109 22:06:17.078804  777892 command_runner.go:130] > # a kernel-separating runtime (like kata).
	I1109 22:06:17.078820  777892 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1109 22:06:17.078825  777892 command_runner.go:130] > # drop_infra_ctr = true
	I1109 22:06:17.078833  777892 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1109 22:06:17.078842  777892 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1109 22:06:17.078851  777892 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1109 22:06:17.078858  777892 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1109 22:06:17.078867  777892 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1109 22:06:17.078873  777892 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1109 22:06:17.078878  777892 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1109 22:06:17.078889  777892 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1109 22:06:17.078902  777892 command_runner.go:130] > # pinns_path = ""
	I1109 22:06:17.078910  777892 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1109 22:06:17.078918  777892 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1109 22:06:17.078928  777892 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1109 22:06:17.078934  777892 command_runner.go:130] > # default_runtime = "runc"
	I1109 22:06:17.078943  777892 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1109 22:06:17.078951  777892 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1109 22:06:17.078963  777892 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1109 22:06:17.078971  777892 command_runner.go:130] > # creation as a file is not desired either.
	I1109 22:06:17.078981  777892 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1109 22:06:17.078990  777892 command_runner.go:130] > # the hostname is being managed dynamically.
	I1109 22:06:17.078996  777892 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1109 22:06:17.079000  777892 command_runner.go:130] > # ]
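
Following the /etc/hostname example above, a populated list might look like this (illustrative, not this run's configuration):

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]
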
	I1109 22:06:17.079009  777892 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1109 22:06:17.079021  777892 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1109 22:06:17.079035  777892 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1109 22:06:17.079043  777892 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1109 22:06:17.079049  777892 command_runner.go:130] > #
	I1109 22:06:17.079055  777892 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1109 22:06:17.079064  777892 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1109 22:06:17.079072  777892 command_runner.go:130] > #  runtime_type = "oci"
	I1109 22:06:17.079078  777892 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1109 22:06:17.079086  777892 command_runner.go:130] > #  privileged_without_host_devices = false
	I1109 22:06:17.079091  777892 command_runner.go:130] > #  allowed_annotations = []
	I1109 22:06:17.079096  777892 command_runner.go:130] > # Where:
	I1109 22:06:17.079105  777892 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1109 22:06:17.079135  777892 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1109 22:06:17.079144  777892 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1109 22:06:17.079151  777892 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1109 22:06:17.079156  777892 command_runner.go:130] > #   in $PATH.
	I1109 22:06:17.079163  777892 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1109 22:06:17.079169  777892 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1109 22:06:17.079178  777892 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1109 22:06:17.079183  777892 command_runner.go:130] > #   state.
	I1109 22:06:17.079190  777892 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1109 22:06:17.079197  777892 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1109 22:06:17.079205  777892 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1109 22:06:17.079211  777892 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1109 22:06:17.079218  777892 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1109 22:06:17.079226  777892 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1109 22:06:17.079232  777892 command_runner.go:130] > #   The currently recognized values are:
	I1109 22:06:17.079240  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1109 22:06:17.079253  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1109 22:06:17.079261  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1109 22:06:17.079272  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1109 22:06:17.079281  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1109 22:06:17.079289  777892 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1109 22:06:17.079296  777892 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1109 22:06:17.079304  777892 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1109 22:06:17.079310  777892 command_runner.go:130] > #   should be moved to the container's cgroup
	I1109 22:06:17.079326  777892 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1109 22:06:17.079332  777892 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1109 22:06:17.079337  777892 command_runner.go:130] > runtime_type = "oci"
	I1109 22:06:17.079347  777892 command_runner.go:130] > runtime_root = "/run/runc"
	I1109 22:06:17.079352  777892 command_runner.go:130] > runtime_config_path = ""
	I1109 22:06:17.079357  777892 command_runner.go:130] > monitor_path = ""
	I1109 22:06:17.079369  777892 command_runner.go:130] > monitor_cgroup = ""
	I1109 22:06:17.079374  777892 command_runner.go:130] > monitor_exec_cgroup = ""
	I1109 22:06:17.079414  777892 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1109 22:06:17.079424  777892 command_runner.go:130] > # running containers
	I1109 22:06:17.079429  777892 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1109 22:06:17.079436  777892 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1109 22:06:17.079446  777892 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1109 22:06:17.079456  777892 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1109 22:06:17.079463  777892 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1109 22:06:17.079468  777892 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1109 22:06:17.079474  777892 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1109 22:06:17.079479  777892 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1109 22:06:17.079486  777892 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1109 22:06:17.079494  777892 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
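
As a hedged sketch, a complete VM-type handler entry in this table could look like the following; the binary and config paths are assumptions for a typical Kata install, not values from this run:

	[crio.runtime.runtimes.kata-qemu]
	runtime_path = "/usr/bin/containerd-shim-kata-v2"   # assumed install path
	runtime_type = "vm"
	runtime_config_path = "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml"   # assumed path
	privileged_without_host_devices = true
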
	I1109 22:06:17.079501  777892 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1109 22:06:17.079510  777892 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1109 22:06:17.079520  777892 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1109 22:06:17.079531  777892 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1109 22:06:17.079541  777892 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I1109 22:06:17.079550  777892 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1109 22:06:17.079560  777892 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only; the value is ignored).
	I1109 22:06:17.079570  777892 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1109 22:06:17.079577  777892 command_runner.go:130] > # signifying that, for that resource type, the default value should be overridden.
	I1109 22:06:17.079588  777892 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1109 22:06:17.079592  777892 command_runner.go:130] > # Example:
	I1109 22:06:17.079599  777892 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1109 22:06:17.079607  777892 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1109 22:06:17.079616  777892 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1109 22:06:17.079624  777892 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1109 22:06:17.079629  777892 command_runner.go:130] > # cpuset = "0-1"
	I1109 22:06:17.079636  777892 command_runner.go:130] > # cpushares = 0
	I1109 22:06:17.079640  777892 command_runner.go:130] > # Where:
	I1109 22:06:17.079646  777892 command_runner.go:130] > # The workload name is workload-type.
	I1109 22:06:17.079657  777892 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1109 22:06:17.079666  777892 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1109 22:06:17.079673  777892 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1109 22:06:17.079682  777892 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1109 22:06:17.079713  777892 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
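
To make the opt-in side concrete, a hedged sketch of the matching pod annotations; the container name and value are assumptions:

	# Given the example workload above, a pod opts in with the annotation key
	# "io.crio/workload" (value ignored), and could override a single container via
	# the $annotation_prefix.$resource/$ctrName form, e.g. (names illustrative):
	#   "io.crio.workload-type.cpushares/my-container" = "512"
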
	I1109 22:06:17.079720  777892 command_runner.go:130] > # 
	I1109 22:06:17.079728  777892 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1109 22:06:17.079731  777892 command_runner.go:130] > #
	I1109 22:06:17.079740  777892 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1109 22:06:17.079747  777892 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1109 22:06:17.079755  777892 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1109 22:06:17.079762  777892 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1109 22:06:17.079769  777892 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1109 22:06:17.079773  777892 command_runner.go:130] > [crio.image]
	I1109 22:06:17.079781  777892 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1109 22:06:17.079790  777892 command_runner.go:130] > # default_transport = "docker://"
	I1109 22:06:17.079801  777892 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1109 22:06:17.079809  777892 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1109 22:06:17.079814  777892 command_runner.go:130] > # global_auth_file = ""
	I1109 22:06:17.079820  777892 command_runner.go:130] > # The image used to instantiate infra containers.
	I1109 22:06:17.079826  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:06:17.079832  777892 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1109 22:06:17.079840  777892 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1109 22:06:17.079850  777892 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1109 22:06:17.079857  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:06:17.079866  777892 command_runner.go:130] > # pause_image_auth_file = ""
	I1109 22:06:17.079873  777892 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1109 22:06:17.079886  777892 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1109 22:06:17.079894  777892 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1109 22:06:17.079901  777892 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1109 22:06:17.079906  777892 command_runner.go:130] > # pause_command = "/pause"
	I1109 22:06:17.079913  777892 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1109 22:06:17.079926  777892 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1109 22:06:17.079936  777892 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1109 22:06:17.079946  777892 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1109 22:06:17.079953  777892 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1109 22:06:17.079958  777892 command_runner.go:130] > # signature_policy = ""
	I1109 22:06:17.079968  777892 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1109 22:06:17.079975  777892 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1109 22:06:17.079980  777892 command_runner.go:130] > # changing them here.
	I1109 22:06:17.079985  777892 command_runner.go:130] > # insecure_registries = [
	I1109 22:06:17.079989  777892 command_runner.go:130] > # ]
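
A minimal populated sketch, assuming a hypothetical local registry (per the note above, configuring registries.conf is preferred):

	insecure_registries = [
		"registry.local:5000",   # hypothetical registry host:port
	]
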
	I1109 22:06:17.079997  777892 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1109 22:06:17.080010  777892 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1109 22:06:17.080017  777892 command_runner.go:130] > # image_volumes = "mkdir"
	I1109 22:06:17.080023  777892 command_runner.go:130] > # Temporary directory to use for storing big files
	I1109 22:06:17.080031  777892 command_runner.go:130] > # big_files_temporary_dir = ""
	I1109 22:06:17.080039  777892 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1109 22:06:17.080045  777892 command_runner.go:130] > # CNI plugins.
	I1109 22:06:17.080050  777892 command_runner.go:130] > [crio.network]
	I1109 22:06:17.080057  777892 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1109 22:06:17.080065  777892 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1109 22:06:17.080072  777892 command_runner.go:130] > # cni_default_network = ""
	I1109 22:06:17.080079  777892 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1109 22:06:17.080084  777892 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1109 22:06:17.080091  777892 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1109 22:06:17.080098  777892 command_runner.go:130] > # plugin_dirs = [
	I1109 22:06:17.080103  777892 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1109 22:06:17.080107  777892 command_runner.go:130] > # ]
	I1109 22:06:17.080118  777892 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1109 22:06:17.080123  777892 command_runner.go:130] > [crio.metrics]
	I1109 22:06:17.080132  777892 command_runner.go:130] > # Globally enable or disable metrics support.
	I1109 22:06:17.080137  777892 command_runner.go:130] > # enable_metrics = false
	I1109 22:06:17.080143  777892 command_runner.go:130] > # Specify enabled metrics collectors.
	I1109 22:06:17.080148  777892 command_runner.go:130] > # By default, all metrics are enabled.
	I1109 22:06:17.080156  777892 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1109 22:06:17.080166  777892 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1109 22:06:17.080173  777892 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1109 22:06:17.080180  777892 command_runner.go:130] > # metrics_collectors = [
	I1109 22:06:17.080187  777892 command_runner.go:130] > # 	"operations",
	I1109 22:06:17.080196  777892 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1109 22:06:17.080202  777892 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1109 22:06:17.080209  777892 command_runner.go:130] > # 	"operations_errors",
	I1109 22:06:17.080214  777892 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1109 22:06:17.080219  777892 command_runner.go:130] > # 	"image_pulls_by_name",
	I1109 22:06:17.080224  777892 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1109 22:06:17.080229  777892 command_runner.go:130] > # 	"image_pulls_failures",
	I1109 22:06:17.080234  777892 command_runner.go:130] > # 	"image_pulls_successes",
	I1109 22:06:17.080242  777892 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1109 22:06:17.080248  777892 command_runner.go:130] > # 	"image_layer_reuse",
	I1109 22:06:17.080255  777892 command_runner.go:130] > # 	"containers_oom_total",
	I1109 22:06:17.080260  777892 command_runner.go:130] > # 	"containers_oom",
	I1109 22:06:17.080264  777892 command_runner.go:130] > # 	"processes_defunct",
	I1109 22:06:17.080271  777892 command_runner.go:130] > # 	"operations_total",
	I1109 22:06:17.080277  777892 command_runner.go:130] > # 	"operations_latency_seconds",
	I1109 22:06:17.080285  777892 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1109 22:06:17.080290  777892 command_runner.go:130] > # 	"operations_errors_total",
	I1109 22:06:17.080296  777892 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1109 22:06:17.080302  777892 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1109 22:06:17.080309  777892 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1109 22:06:17.080314  777892 command_runner.go:130] > # 	"image_pulls_success_total",
	I1109 22:06:17.080322  777892 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1109 22:06:17.080327  777892 command_runner.go:130] > # 	"containers_oom_count_total",
	I1109 22:06:17.080332  777892 command_runner.go:130] > # ]
	I1109 22:06:17.080338  777892 command_runner.go:130] > # The port on which the metrics server will listen.
	I1109 22:06:17.080345  777892 command_runner.go:130] > # metrics_port = 9090
	I1109 22:06:17.080370  777892 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1109 22:06:17.080378  777892 command_runner.go:130] > # metrics_socket = ""
	I1109 22:06:17.080384  777892 command_runner.go:130] > # The certificate for the secure metrics server.
	I1109 22:06:17.080392  777892 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1109 22:06:17.080399  777892 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1109 22:06:17.080404  777892 command_runner.go:130] > # certificate on any modification event.
	I1109 22:06:17.080409  777892 command_runner.go:130] > # metrics_cert = ""
	I1109 22:06:17.080415  777892 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1109 22:06:17.080421  777892 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1109 22:06:17.080428  777892 command_runner.go:130] > # metrics_key = ""
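
A minimal sketch of an enabled-metrics stanza, assuming the default port (values illustrative, not from this run):

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	# Names may be given bare or with the "crio_"/"container_runtime_" prefixes noted above.
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
	]
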
	I1109 22:06:17.080435  777892 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1109 22:06:17.080440  777892 command_runner.go:130] > [crio.tracing]
	I1109 22:06:17.080447  777892 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1109 22:06:17.080451  777892 command_runner.go:130] > # enable_tracing = false
	I1109 22:06:17.080458  777892 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1109 22:06:17.080480  777892 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1109 22:06:17.080487  777892 command_runner.go:130] > # Number of samples to collect per million spans.
	I1109 22:06:17.080493  777892 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
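
A hedged sketch of an enabled-tracing stanza; the collector address and sampling rate are assumptions:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"           # assumed local OTLP gRPC collector
	tracing_sampling_rate_per_million = 100000    # sample roughly 10% of spans
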
	I1109 22:06:17.080500  777892 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1109 22:06:17.080505  777892 command_runner.go:130] > [crio.stats]
	I1109 22:06:17.080512  777892 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1109 22:06:17.080520  777892 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1109 22:06:17.080526  777892 command_runner.go:130] > # stats_collection_period = 0
	I1109 22:06:17.082365  777892 command_runner.go:130] ! time="2023-11-09 22:06:17.071947643Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1109 22:06:17.082393  777892 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1109 22:06:17.082512  777892 cni.go:84] Creating CNI manager for ""
	I1109 22:06:17.082524  777892 cni.go:136] 1 nodes found, recommending kindnet
	I1109 22:06:17.082554  777892 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 22:06:17.082574  777892 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-833232 NodeName:multinode-833232 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 22:06:17.082746  777892 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-833232"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 22:06:17.082823  777892 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-833232 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-833232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 22:06:17.082889  777892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1109 22:06:17.092504  777892 command_runner.go:130] > kubeadm
	I1109 22:06:17.092525  777892 command_runner.go:130] > kubectl
	I1109 22:06:17.092530  777892 command_runner.go:130] > kubelet
	I1109 22:06:17.093673  777892 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 22:06:17.093748  777892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 22:06:17.104798  777892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1109 22:06:17.125727  777892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 22:06:17.147095  777892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1109 22:06:17.168175  777892 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1109 22:06:17.172629  777892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 22:06:17.186078  777892 certs.go:56] Setting up /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232 for IP: 192.168.58.2
	I1109 22:06:17.186154  777892 certs.go:190] acquiring lock for shared ca certs: {Name:mk44b1a46a3acda84ddb5040e7a20ebcace98935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:17.186294  777892 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key
	I1109 22:06:17.186398  777892 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key
	I1109 22:06:17.186447  777892 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.key
	I1109 22:06:17.186463  777892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.crt with IP's: []
	I1109 22:06:17.507128  777892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.crt ...
	I1109 22:06:17.507159  777892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.crt: {Name:mk32dd6c9d9a6f9f6c719c50e1c2015bbf922efe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:17.507351  777892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.key ...
	I1109 22:06:17.507365  777892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.key: {Name:mk904474e205c66829b8ec591b41d3bf36f7026c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:17.507460  777892 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.key.cee25041
	I1109 22:06:17.507476  777892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1109 22:06:17.723243  777892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.crt.cee25041 ...
	I1109 22:06:17.723272  777892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.crt.cee25041: {Name:mk0e39c286efc9c546bf69aa73649cfcca52a846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:17.723447  777892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.key.cee25041 ...
	I1109 22:06:17.723462  777892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.key.cee25041: {Name:mkd12b9ad924c3736b022098e0214d19ef4ebb52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:17.723544  777892 certs.go:337] copying /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.crt
	I1109 22:06:17.723631  777892 certs.go:341] copying /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.key
	I1109 22:06:17.723701  777892 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.key
	I1109 22:06:17.723716  777892 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.crt with IP's: []
	I1109 22:06:18.745982  777892 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.crt ...
	I1109 22:06:18.746013  777892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.crt: {Name:mkd5b056cfbded4924102b07ea53c9fde5d417fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:18.746198  777892 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.key ...
	I1109 22:06:18.746213  777892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.key: {Name:mkaa6f96f495620da9e787e02ab77f9c78050927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:18.746291  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 22:06:18.746366  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 22:06:18.746382  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 22:06:18.746395  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 22:06:18.746410  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 22:06:18.746425  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 22:06:18.746440  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 22:06:18.746453  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 22:06:18.746513  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem (1338 bytes)
	W1109 22:06:18.746551  777892 certs.go:433] ignoring /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573_empty.pem, impossibly tiny 0 bytes
	I1109 22:06:18.746566  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 22:06:18.746592  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem (1078 bytes)
	I1109 22:06:18.746624  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem (1123 bytes)
	I1109 22:06:18.746655  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem (1679 bytes)
	I1109 22:06:18.746707  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 22:06:18.746739  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:06:18.746756  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem -> /usr/share/ca-certificates/713573.pem
	I1109 22:06:18.746773  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> /usr/share/ca-certificates/7135732.pem
	I1109 22:06:18.747376  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 22:06:18.776511  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 22:06:18.804842  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 22:06:18.832789  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 22:06:18.860250  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 22:06:18.887500  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 22:06:18.915707  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 22:06:18.943200  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 22:06:18.971298  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 22:06:18.999568  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem --> /usr/share/ca-certificates/713573.pem (1338 bytes)
	I1109 22:06:19.029097  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /usr/share/ca-certificates/7135732.pem (1708 bytes)
	I1109 22:06:19.057687  777892 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 22:06:19.078536  777892 ssh_runner.go:195] Run: openssl version
	I1109 22:06:19.085238  777892 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1109 22:06:19.085669  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 22:06:19.097395  777892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:06:19.101836  777892 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  9 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:06:19.101881  777892 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  9 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:06:19.101946  777892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:06:19.110141  777892 command_runner.go:130] > b5213941
	I1109 22:06:19.110584  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 22:06:19.125428  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/713573.pem && ln -fs /usr/share/ca-certificates/713573.pem /etc/ssl/certs/713573.pem"
	I1109 22:06:19.137042  777892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/713573.pem
	I1109 22:06:19.141658  777892 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  9 21:41 /usr/share/ca-certificates/713573.pem
	I1109 22:06:19.141958  777892 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  9 21:41 /usr/share/ca-certificates/713573.pem
	I1109 22:06:19.142043  777892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/713573.pem
	I1109 22:06:19.150270  777892 command_runner.go:130] > 51391683
	I1109 22:06:19.150673  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/713573.pem /etc/ssl/certs/51391683.0"
	I1109 22:06:19.161960  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135732.pem && ln -fs /usr/share/ca-certificates/7135732.pem /etc/ssl/certs/7135732.pem"
	I1109 22:06:19.173244  777892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135732.pem
	I1109 22:06:19.177753  777892 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  9 21:41 /usr/share/ca-certificates/7135732.pem
	I1109 22:06:19.178076  777892 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  9 21:41 /usr/share/ca-certificates/7135732.pem
	I1109 22:06:19.178166  777892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135732.pem
	I1109 22:06:19.186353  777892 command_runner.go:130] > 3ec20f2e
	I1109 22:06:19.186428  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7135732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 22:06:19.197702  777892 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1109 22:06:19.202007  777892 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1109 22:06:19.202047  777892 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1109 22:06:19.202089  777892 kubeadm.go:404] StartCluster: {Name:multinode-833232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-833232 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 22:06:19.202169  777892 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 22:06:19.202239  777892 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 22:06:19.250778  777892 cri.go:89] found id: ""
	I1109 22:06:19.250889  777892 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 22:06:19.260201  777892 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1109 22:06:19.260272  777892 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1109 22:06:19.260295  777892 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1109 22:06:19.261372  777892 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 22:06:19.271695  777892 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1109 22:06:19.271762  777892 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 22:06:19.281733  777892 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1109 22:06:19.281794  777892 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1109 22:06:19.281809  777892 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1109 22:06:19.281821  777892 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 22:06:19.281847  777892 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 22:06:19.281879  777892 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 22:06:19.333472  777892 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1109 22:06:19.333501  777892 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1109 22:06:19.333688  777892 kubeadm.go:322] [preflight] Running pre-flight checks
	I1109 22:06:19.333705  777892 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 22:06:19.378500  777892 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1109 22:06:19.378528  777892 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1109 22:06:19.378580  777892 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1109 22:06:19.378588  777892 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1049-aws
	I1109 22:06:19.378627  777892 kubeadm.go:322] OS: Linux
	I1109 22:06:19.378637  777892 command_runner.go:130] > OS: Linux
	I1109 22:06:19.378685  777892 kubeadm.go:322] CGROUPS_CPU: enabled
	I1109 22:06:19.378695  777892 command_runner.go:130] > CGROUPS_CPU: enabled
	I1109 22:06:19.378739  777892 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1109 22:06:19.378747  777892 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1109 22:06:19.378790  777892 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1109 22:06:19.378799  777892 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1109 22:06:19.378844  777892 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1109 22:06:19.378853  777892 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1109 22:06:19.378898  777892 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1109 22:06:19.378907  777892 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1109 22:06:19.378953  777892 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1109 22:06:19.378964  777892 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1109 22:06:19.379006  777892 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1109 22:06:19.379014  777892 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1109 22:06:19.379058  777892 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1109 22:06:19.379066  777892 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1109 22:06:19.379109  777892 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1109 22:06:19.379117  777892 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1109 22:06:19.461983  777892 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 22:06:19.462022  777892 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 22:06:19.462130  777892 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 22:06:19.462142  777892 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 22:06:19.462249  777892 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 22:06:19.462259  777892 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 22:06:19.708992  777892 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 22:06:19.714643  777892 out.go:204]   - Generating certificates and keys ...
	I1109 22:06:19.709409  777892 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 22:06:19.714846  777892 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1109 22:06:19.714871  777892 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1109 22:06:19.714962  777892 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1109 22:06:19.714996  777892 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1109 22:06:20.058605  777892 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 22:06:20.058629  777892 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 22:06:20.550843  777892 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1109 22:06:20.550871  777892 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1109 22:06:20.935084  777892 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1109 22:06:20.935152  777892 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1109 22:06:21.297031  777892 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1109 22:06:21.297055  777892 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1109 22:06:21.771725  777892 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1109 22:06:21.771764  777892 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1109 22:06:21.772111  777892 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-833232] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1109 22:06:21.772130  777892 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-833232] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1109 22:06:22.239739  777892 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1109 22:06:22.239799  777892 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1109 22:06:22.240141  777892 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-833232] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1109 22:06:22.240155  777892 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-833232] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1109 22:06:22.444399  777892 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 22:06:22.444423  777892 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 22:06:22.764454  777892 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 22:06:22.764486  777892 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 22:06:23.975637  777892 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1109 22:06:23.975670  777892 command_runner.go:130] > [certs] Generating "sa" key and public key
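
The [certs] phase above has written everything under /var/lib/minikube/certs, the certificateDir named at the start of the phase. To double-check which SANs ended up on the etcd serving certificate (the log claims localhost, multinode-833232, 192.168.58.2, 127.0.0.1 and ::1), a minimal sketch, assuming shell access to the node (e.g. via minikube ssh):

    # Print the Subject Alternative Names of the etcd serving cert
    sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text \
      | grep -A1 "Subject Alternative Name"
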
	I1109 22:06:23.975991  777892 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 22:06:23.976015  777892 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 22:06:24.281453  777892 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 22:06:24.281482  777892 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 22:06:24.399566  777892 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 22:06:24.399604  777892 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 22:06:24.772319  777892 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 22:06:24.772349  777892 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 22:06:25.062937  777892 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 22:06:25.062973  777892 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 22:06:25.063570  777892 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 22:06:25.063590  777892 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 22:06:25.066320  777892 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 22:06:25.068862  777892 out.go:204]   - Booting up control plane ...
	I1109 22:06:25.066414  777892 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 22:06:25.068973  777892 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 22:06:25.068988  777892 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 22:06:25.069102  777892 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 22:06:25.069114  777892 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 22:06:25.069616  777892 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 22:06:25.069638  777892 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 22:06:25.080531  777892 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 22:06:25.080560  777892 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 22:06:25.083476  777892 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 22:06:25.083506  777892 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 22:06:25.083545  777892 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1109 22:06:25.083557  777892 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1109 22:06:25.190004  777892 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 22:06:25.190029  777892 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 22:06:32.193934  777892 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003110 seconds
	I1109 22:06:32.193969  777892 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.003110 seconds
	I1109 22:06:32.194100  777892 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 22:06:32.194115  777892 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 22:06:32.209263  777892 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 22:06:32.209291  777892 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 22:06:32.736935  777892 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 22:06:32.736959  777892 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1109 22:06:32.737131  777892 kubeadm.go:322] [mark-control-plane] Marking the node multinode-833232 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 22:06:32.737137  777892 command_runner.go:130] > [mark-control-plane] Marking the node multinode-833232 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 22:06:33.253212  777892 kubeadm.go:322] [bootstrap-token] Using token: wwbf4z.iusip1v5rl3enorw
	I1109 22:06:33.255578  777892 out.go:204]   - Configuring RBAC rules ...
	I1109 22:06:33.253322  777892 command_runner.go:130] > [bootstrap-token] Using token: wwbf4z.iusip1v5rl3enorw
	I1109 22:06:33.255697  777892 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 22:06:33.255707  777892 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 22:06:33.260721  777892 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 22:06:33.260747  777892 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 22:06:33.268541  777892 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 22:06:33.268572  777892 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 22:06:33.272454  777892 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 22:06:33.272476  777892 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 22:06:33.277697  777892 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 22:06:33.277722  777892 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 22:06:33.282698  777892 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 22:06:33.282722  777892 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 22:06:33.295839  777892 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 22:06:33.295861  777892 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 22:06:33.557841  777892 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1109 22:06:33.557868  777892 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1109 22:06:33.724733  777892 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1109 22:06:33.724760  777892 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1109 22:06:33.724766  777892 kubeadm.go:322] 
	I1109 22:06:33.724823  777892 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1109 22:06:33.724832  777892 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1109 22:06:33.724837  777892 kubeadm.go:322] 
	I1109 22:06:33.724910  777892 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1109 22:06:33.724921  777892 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1109 22:06:33.724927  777892 kubeadm.go:322] 
	I1109 22:06:33.724951  777892 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1109 22:06:33.724960  777892 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1109 22:06:33.725016  777892 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 22:06:33.725024  777892 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 22:06:33.725072  777892 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 22:06:33.725081  777892 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 22:06:33.725086  777892 kubeadm.go:322] 
	I1109 22:06:33.725143  777892 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1109 22:06:33.725152  777892 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1109 22:06:33.725157  777892 kubeadm.go:322] 
	I1109 22:06:33.725202  777892 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 22:06:33.725210  777892 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 22:06:33.725215  777892 kubeadm.go:322] 
	I1109 22:06:33.725264  777892 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1109 22:06:33.725273  777892 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1109 22:06:33.725343  777892 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 22:06:33.725351  777892 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 22:06:33.725415  777892 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 22:06:33.725424  777892 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 22:06:33.725428  777892 kubeadm.go:322] 
	I1109 22:06:33.725508  777892 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 22:06:33.725517  777892 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1109 22:06:33.725590  777892 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1109 22:06:33.725598  777892 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1109 22:06:33.725603  777892 kubeadm.go:322] 
	I1109 22:06:33.725682  777892 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token wwbf4z.iusip1v5rl3enorw \
	I1109 22:06:33.725690  777892 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token wwbf4z.iusip1v5rl3enorw \
	I1109 22:06:33.725787  777892 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 \
	I1109 22:06:33.725796  777892 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 \
	I1109 22:06:33.725816  777892 kubeadm.go:322] 	--control-plane 
	I1109 22:06:33.725829  777892 command_runner.go:130] > 	--control-plane 
	I1109 22:06:33.725834  777892 kubeadm.go:322] 
	I1109 22:06:33.725914  777892 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1109 22:06:33.725919  777892 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1109 22:06:33.725923  777892 kubeadm.go:322] 
	I1109 22:06:33.726000  777892 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token wwbf4z.iusip1v5rl3enorw \
	I1109 22:06:33.726005  777892 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wwbf4z.iusip1v5rl3enorw \
	I1109 22:06:33.726101  777892 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 
	I1109 22:06:33.726106  777892 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 
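
The join commands above embed a bootstrap token (wwbf4z.…) plus the CA cert hash; the token is how a joining node authenticates to the control plane, and the hash is how it verifies the control plane in return. Tokens are time-limited, so if one expires before a worker joins, it can be reissued on the control-plane node. A hedged sketch using standard kubeadm subcommands:

    # List existing bootstrap tokens and their TTLs
    kubeadm token list
    # Mint a fresh token and print a ready-to-use join command
    kubeadm token create --print-join-command
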
	I1109 22:06:33.729121  777892 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1109 22:06:33.729143  777892 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1109 22:06:33.729243  777892 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 22:06:33.729253  777892 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
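
Both warnings above are benign in this environment: kubeadm probes the kernel config by loading the "configs" module, which the Ubuntu AWS kernel does not ship, and minikube manages the kubelet itself rather than relying on systemd enablement. A minimal sketch for confirming the kernel config is still readable, assuming a Debian/Ubuntu-style layout:

    # kubeadm first tries the "configs" module...
    sudo modprobe configs 2>/dev/null || true
    # ...but the same data is usually present as a plain file
    ls /proc/config.gz /boot/config-$(uname -r) 2>/dev/null
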
	I1109 22:06:33.729265  777892 cni.go:84] Creating CNI manager for ""
	I1109 22:06:33.729271  777892 cni.go:136] 1 nodes found, recommending kindnet
	I1109 22:06:33.733242  777892 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1109 22:06:33.735718  777892 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 22:06:33.749604  777892 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1109 22:06:33.749627  777892 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1109 22:06:33.749636  777892 command_runner.go:130] > Device: 36h/54d	Inode: 1827011     Links: 1
	I1109 22:06:33.749644  777892 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1109 22:06:33.749651  777892 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1109 22:06:33.749657  777892 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1109 22:06:33.749664  777892 command_runner.go:130] > Change: 2023-11-09 21:28:21.758106581 +0000
	I1109 22:06:33.749675  777892 command_runner.go:130] >  Birth: 2023-11-09 21:28:21.718106882 +0000
	I1109 22:06:33.750223  777892 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1109 22:06:33.750239  777892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1109 22:06:33.813685  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 22:06:34.652041  777892 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1109 22:06:34.666624  777892 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1109 22:06:34.678965  777892 command_runner.go:130] > serviceaccount/kindnet created
	I1109 22:06:34.695247  777892 command_runner.go:130] > daemonset.apps/kindnet created
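
With the manifest applied, kindnet runs as a DaemonSet providing the pod network. A hedged way to confirm it rolled out (namespace and label taken from the upstream kindnet manifest; adjust if yours differs):

    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide
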
	I1109 22:06:34.700381  777892 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 22:06:34.700587  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:34.700728  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b minikube.k8s.io/name=multinode-833232 minikube.k8s.io/updated_at=2023_11_09T22_06_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:34.882871  777892 command_runner.go:130] > node/multinode-833232 labeled
	I1109 22:06:34.886392  777892 command_runner.go:130] > -16
	I1109 22:06:34.886462  777892 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1109 22:06:34.886497  777892 ops.go:34] apiserver oom_adj: -16
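
The -16 read back here is the kube-apiserver's OOM score adjustment: a negative value steers the kernel's OOM killer toward other processes first, protecting the apiserver under memory pressure. The check is the same one the log ran, reproducible on the node (assumes a single apiserver process):

    # Lower (more negative) values make the OOM killer less likely to pick this process
    cat /proc/$(pgrep kube-apiserver)/oom_adj
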
	I1109 22:06:34.886595  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:34.977761  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:34.977902  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:35.069076  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:35.569872  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:35.657505  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:36.070182  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:36.161571  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:36.569714  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:36.659756  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:37.069780  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:37.174126  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:37.569834  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:37.667596  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:38.070151  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:38.161893  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:38.569309  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:38.655798  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:39.069807  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:39.161833  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:39.569304  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:39.661517  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:40.070128  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:40.163766  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:40.569425  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:40.663675  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:41.069334  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:41.160590  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:41.570172  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:41.661714  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:42.069439  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:42.167800  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:42.569467  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:42.657990  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:43.069316  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:43.161977  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:43.569329  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:43.670736  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:44.069336  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:44.160519  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:44.570168  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:44.663330  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:45.069501  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:45.172956  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:45.569528  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:45.670161  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:46.069340  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:46.161224  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:46.569329  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:46.680229  777892 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1109 22:06:47.070021  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 22:06:47.162240  777892 command_runner.go:130] > NAME      SECRETS   AGE
	I1109 22:06:47.162260  777892 command_runner.go:130] > default   0         1s
	I1109 22:06:47.165574  777892 kubeadm.go:1081] duration metric: took 12.465089057s to wait for elevateKubeSystemPrivileges.
	I1109 22:06:47.165599  777892 kubeadm.go:406] StartCluster complete in 27.963514256s
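
The run of NotFound errors above is expected: the "default" ServiceAccount only exists once the controller-manager's serviceaccount controller has caught up, so minikube simply polls until it appears (about 12.5s here). An equivalent minimal loop, as a sketch:

    # Block until the controller-manager has created the default ServiceAccount
    until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
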
	I1109 22:06:47.165615  777892 settings.go:142] acquiring lock: {Name:mk717b4baf2280543b738622644195ea0d60d476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:47.165675  777892 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:06:47.166410  777892 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17565-708188/kubeconfig: {Name:mk5701fd19491b0b49f183ef877286e38ea5f8d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:06:47.166924  777892 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:06:47.167194  777892 kapi.go:59] client config for multinode-833232: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 22:06:47.168321  777892 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1109 22:06:47.168339  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:47.168350  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:47.168358  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:47.168551  777892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 22:06:47.168750  777892 config.go:182] Loaded profile config "multinode-833232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 22:06:47.168866  777892 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1109 22:06:47.168926  777892 addons.go:69] Setting storage-provisioner=true in profile "multinode-833232"
	I1109 22:06:47.168939  777892 addons.go:231] Setting addon storage-provisioner=true in "multinode-833232"
	I1109 22:06:47.168994  777892 host.go:66] Checking if "multinode-833232" exists ...
	I1109 22:06:47.169463  777892 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Status}}
	I1109 22:06:47.169980  777892 cert_rotation.go:137] Starting client certificate rotation controller
	I1109 22:06:47.170015  777892 addons.go:69] Setting default-storageclass=true in profile "multinode-833232"
	I1109 22:06:47.170027  777892 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-833232"
	I1109 22:06:47.170298  777892 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Status}}
	I1109 22:06:47.198876  777892 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:06:47.199199  777892 kapi.go:59] client config for multinode-833232: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 22:06:47.199462  777892 addons.go:231] Setting addon default-storageclass=true in "multinode-833232"
	I1109 22:06:47.199489  777892 host.go:66] Checking if "multinode-833232" exists ...
	I1109 22:06:47.199901  777892 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Status}}
	I1109 22:06:47.203572  777892 round_trippers.go:574] Response Status: 200 OK in 35 milliseconds
	I1109 22:06:47.203589  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:47.203597  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:47 GMT
	I1109 22:06:47.203604  777892 round_trippers.go:580]     Audit-Id: c0f48bae-f61e-4288-a781-eb5427f8d1a7
	I1109 22:06:47.203610  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:47.203615  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:47.203622  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:47.203628  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:47.203634  777892 round_trippers.go:580]     Content-Length: 291
	I1109 22:06:47.203657  777892 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cd0c9666-0fbb-4844-a49b-1e39c4363b86","resourceVersion":"387","creationTimestamp":"2023-11-09T22:06:33Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1109 22:06:47.204058  777892 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cd0c9666-0fbb-4844-a49b-1e39c4363b86","resourceVersion":"387","creationTimestamp":"2023-11-09T22:06:33Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1109 22:06:47.204119  777892 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1109 22:06:47.204126  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:47.204134  777892 round_trippers.go:473]     Content-Type: application/json
	I1109 22:06:47.204141  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:47.204147  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:47.217881  777892 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1109 22:06:47.217904  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:47.217913  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:47.217920  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:47.217926  777892 round_trippers.go:580]     Content-Length: 291
	I1109 22:06:47.217932  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:47 GMT
	I1109 22:06:47.217938  777892 round_trippers.go:580]     Audit-Id: cafa5e94-6706-4074-ae15-2b2c8a36b033
	I1109 22:06:47.217944  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:47.217950  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:47.217972  777892 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cd0c9666-0fbb-4844-a49b-1e39c4363b86","resourceVersion":"388","creationTimestamp":"2023-11-09T22:06:33Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1109 22:06:47.218111  777892 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1109 22:06:47.218119  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:47.218126  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:47.218133  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:47.230351  777892 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 22:06:47.232341  777892 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 22:06:47.232364  777892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 22:06:47.232430  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:47.230531  777892 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1109 22:06:47.232659  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:47.232673  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:47.232680  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:47.232686  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:47.232692  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:47.232698  777892 round_trippers.go:580]     Content-Length: 291
	I1109 22:06:47.232706  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:47 GMT
	I1109 22:06:47.232712  777892 round_trippers.go:580]     Audit-Id: 9c494102-013f-47b3-99da-187748bee5f6
	I1109 22:06:47.232736  777892 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cd0c9666-0fbb-4844-a49b-1e39c4363b86","resourceVersion":"388","creationTimestamp":"2023-11-09T22:06:33Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1109 22:06:47.232826  777892 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-833232" context rescaled to 1 replicas
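
The GET/PUT pair above goes through the Deployment's scale subresource to drop CoreDNS from 2 replicas to 1, since a single-node cluster gains nothing from a second copy. The same rescale via kubectl, as a sketch:

    kubectl -n kube-system scale deployment coredns --replicas=1
    # or, on recent kubectl (v1.24+), hitting the scale subresource directly:
    kubectl -n kube-system patch deployment coredns --subresource=scale \
      --type=merge -p '{"spec":{"replicas":1}}'
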
	I1109 22:06:47.232851  777892 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 22:06:47.238422  777892 out.go:177] * Verifying Kubernetes components...
	I1109 22:06:47.241062  777892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 22:06:47.262617  777892 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 22:06:47.262637  777892 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 22:06:47.262716  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:06:47.281938  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:06:47.309468  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:06:47.421791  777892 command_runner.go:130] > apiVersion: v1
	I1109 22:06:47.421857  777892 command_runner.go:130] > data:
	I1109 22:06:47.421888  777892 command_runner.go:130] >   Corefile: |
	I1109 22:06:47.421931  777892 command_runner.go:130] >     .:53 {
	I1109 22:06:47.421955  777892 command_runner.go:130] >         errors
	I1109 22:06:47.421981  777892 command_runner.go:130] >         health {
	I1109 22:06:47.422007  777892 command_runner.go:130] >            lameduck 5s
	I1109 22:06:47.422035  777892 command_runner.go:130] >         }
	I1109 22:06:47.422061  777892 command_runner.go:130] >         ready
	I1109 22:06:47.422084  777892 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1109 22:06:47.422118  777892 command_runner.go:130] >            pods insecure
	I1109 22:06:47.422157  777892 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1109 22:06:47.422200  777892 command_runner.go:130] >            ttl 30
	I1109 22:06:47.422260  777892 command_runner.go:130] >         }
	I1109 22:06:47.422295  777892 command_runner.go:130] >         prometheus :9153
	I1109 22:06:47.422349  777892 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1109 22:06:47.422374  777892 command_runner.go:130] >            max_concurrent 1000
	I1109 22:06:47.422392  777892 command_runner.go:130] >         }
	I1109 22:06:47.422421  777892 command_runner.go:130] >         cache 30
	I1109 22:06:47.422450  777892 command_runner.go:130] >         loop
	I1109 22:06:47.422473  777892 command_runner.go:130] >         reload
	I1109 22:06:47.422493  777892 command_runner.go:130] >         loadbalance
	I1109 22:06:47.422512  777892 command_runner.go:130] >     }
	I1109 22:06:47.422533  777892 command_runner.go:130] > kind: ConfigMap
	I1109 22:06:47.422562  777892 command_runner.go:130] > metadata:
	I1109 22:06:47.422599  777892 command_runner.go:130] >   creationTimestamp: "2023-11-09T22:06:33Z"
	I1109 22:06:47.422618  777892 command_runner.go:130] >   name: coredns
	I1109 22:06:47.422635  777892 command_runner.go:130] >   namespace: kube-system
	I1109 22:06:47.422664  777892 command_runner.go:130] >   resourceVersion: "255"
	I1109 22:06:47.422687  777892 command_runner.go:130] >   uid: eca041ae-bb6b-4cc6-a7b3-da86c3810b36
	I1109 22:06:47.423532  777892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
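
The sed pipeline above edits the CoreDNS Corefile in place: it inserts a log directive ahead of errors and a hosts plugin block ahead of the forward stanza, so pods can resolve host.minikube.internal to the host gateway. Reconstructed from the sed expressions, the injected stanza is:

        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
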
	I1109 22:06:47.423977  777892 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:06:47.424390  777892 kapi.go:59] client config for multinode-833232: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 22:06:47.424729  777892 node_ready.go:35] waiting up to 6m0s for node "multinode-833232" to be "Ready" ...
	I1109 22:06:47.425182  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:47.425214  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:47.425237  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:47.425258  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:47.428578  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:06:47.428598  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:47.428606  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:47.428613  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:47.428620  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:47 GMT
	I1109 22:06:47.428626  777892 round_trippers.go:580]     Audit-Id: 7b3b63cb-e81b-4a8d-8991-f98620b3d0d5
	I1109 22:06:47.428632  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:47.428639  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:47.429247  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:47.429972  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:47.429982  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:47.429990  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:47.429997  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:47.432220  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:47.432234  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:47.432241  777892 round_trippers.go:580]     Audit-Id: cb41f945-19ac-41ad-9095-f627301268bc
	I1109 22:06:47.432248  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:47.432254  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:47.432260  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:47.432266  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:47.432275  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:47 GMT
	I1109 22:06:47.432761  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:47.454776  777892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 22:06:47.485103  777892 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 22:06:47.933368  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:47.933436  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:47.933460  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:47.933482  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:47.937255  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:06:47.937326  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:47.937349  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:47.937372  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:47.937408  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:47.937433  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:47 GMT
	I1109 22:06:47.937455  777892 round_trippers.go:580]     Audit-Id: 504b8f00-2624-4dc3-aa36-b8b6f4f12075
	I1109 22:06:47.937491  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:47.940807  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:48.151884  777892 command_runner.go:130] > configmap/coredns replaced
	I1109 22:06:48.157350  777892 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1109 22:06:48.256095  777892 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1109 22:06:48.264824  777892 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1109 22:06:48.274194  777892 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1109 22:06:48.287918  777892 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1109 22:06:48.299328  777892 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1109 22:06:48.310915  777892 command_runner.go:130] > pod/storage-provisioner created
	I1109 22:06:48.316510  777892 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1109 22:06:48.316741  777892 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1109 22:06:48.316769  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:48.316792  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:48.316825  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:48.324157  777892 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1109 22:06:48.324182  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:48.324191  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:48.324198  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:48.324211  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:48.324219  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:48.324241  777892 round_trippers.go:580]     Content-Length: 1273
	I1109 22:06:48.324260  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:48 GMT
	I1109 22:06:48.324267  777892 round_trippers.go:580]     Audit-Id: a0822381-dfcf-4aa4-993b-fd87614027fa
	I1109 22:06:48.324933  777892 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"414"},"items":[{"metadata":{"name":"standard","uid":"11042501-c727-4aeb-8934-f36638e35496","resourceVersion":"405","creationTimestamp":"2023-11-09T22:06:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-09T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1109 22:06:48.325383  777892 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"11042501-c727-4aeb-8934-f36638e35496","resourceVersion":"405","creationTimestamp":"2023-11-09T22:06:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-09T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1109 22:06:48.325451  777892 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1109 22:06:48.325465  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:48.325473  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:48.325486  777892 round_trippers.go:473]     Content-Type: application/json
	I1109 22:06:48.325492  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:48.330006  777892 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 22:06:48.330068  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:48.330091  777892 round_trippers.go:580]     Content-Length: 1220
	I1109 22:06:48.330133  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:48 GMT
	I1109 22:06:48.330157  777892 round_trippers.go:580]     Audit-Id: d9185932-9dac-4ed1-aa03-ad3799f31d15
	I1109 22:06:48.330177  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:48.330198  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:48.330235  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:48.330255  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:48.330383  777892 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"11042501-c727-4aeb-8934-f36638e35496","resourceVersion":"405","creationTimestamp":"2023-11-09T22:06:48Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-09T22:06:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1109 22:06:48.332878  777892 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1109 22:06:48.334644  777892 addons.go:502] enable addons completed in 1.165773117s: enabled=[storage-provisioner default-storageclass]
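
	(With addons enabled, everything that follows is minikube's node-readiness wait: the same GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232 is reissued roughly every 500ms until the node's Ready condition turns True. A minimal Go sketch of that polling pattern, assuming a client-go clientset and apimachinery's PollUntilContextTimeout helper; names here are illustrative, not minikube's actual node_ready.go code:)

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls GET /api/v1/nodes/<name> on a fixed interval until
	// the node reports a Ready=True condition or the timeout elapses. Each
	// iteration that still sees Ready=False corresponds to one of the
	// node_ready.go:58 "Ready":"False" lines in the log below.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Treat transient API errors as "not ready yet" and keep polling.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						fmt.Printf("node %q has status Ready: %v\n", name, c.Status)
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
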
	I1109 22:06:48.434263  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:48.434286  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:48.434297  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:48.434304  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:48.437523  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:06:48.437543  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:48.437552  777892 round_trippers.go:580]     Audit-Id: 7b9d5328-3a7f-47f1-9bb6-4d14f1071cee
	I1109 22:06:48.437558  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:48.437564  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:48.437571  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:48.437580  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:48.437587  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:48 GMT
	I1109 22:06:48.438041  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:48.934124  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:48.934145  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:48.934155  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:48.934162  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:48.936829  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:48.936890  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:48.936912  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:48.936935  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:48.936969  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:48.936991  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:48.937012  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:48 GMT
	I1109 22:06:48.937034  777892 round_trippers.go:580]     Audit-Id: 9697f2f5-0acb-461b-8029-4a3b1e17f780
	I1109 22:06:48.937165  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:49.433372  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:49.433404  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:49.433415  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:49.433423  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:49.436071  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:49.436136  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:49.436160  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:49.436182  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:49.436219  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:49.436250  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:49.436274  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:49 GMT
	I1109 22:06:49.436295  777892 round_trippers.go:580]     Audit-Id: e0f2762a-f533-4433-b311-208dbce8adb1
	I1109 22:06:49.436428  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:49.436860  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
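
	(The round_trippers.go lines throughout this log come from client-go's verbose request logging, which wraps the HTTP transport and prints the method, URL, request headers, response status with latency, and response headers for every call. A rough self-contained sketch of that wrapper idea using only net/http, not client-go's actual implementation:)

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// logRoundTripper wraps another http.RoundTripper and logs each exchange
	// in a shape similar to the round_trippers.go output above.
	type logRoundTripper struct {
		next http.RoundTripper
	}

	func (l *logRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
		fmt.Printf("%s %s\n", req.Method, req.URL)
		fmt.Println("Request Headers:")
		for k, vs := range req.Header {
			for _, v := range vs {
				fmt.Printf("    %s: %s\n", k, v)
			}
		}
		start := time.Now()
		resp, err := l.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		fmt.Printf("Response Status: %s in %d milliseconds\n", resp.Status, time.Since(start).Milliseconds())
		fmt.Println("Response Headers:")
		for k, vs := range resp.Header {
			for _, v := range vs {
				fmt.Printf("    %s: %s\n", k, v)
			}
		}
		return resp, nil
	}

	func main() {
		// Install the logging transport on a plain client (hypothetical URL).
		client := &http.Client{Transport: &logRoundTripper{next: http.DefaultTransport}}
		resp, err := client.Get("https://example.org/")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
	}
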
	I1109 22:06:49.933962  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:49.934014  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:49.934024  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:49.934031  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:49.936530  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:49.936558  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:49.936566  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:49.936573  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:49.936597  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:49.936612  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:49.936618  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:49 GMT
	I1109 22:06:49.936626  777892 round_trippers.go:580]     Audit-Id: e7a7bf48-a6a8-49ba-8f14-443f742d2ccc
	I1109 22:06:49.937026  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:50.433637  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:50.433663  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:50.433673  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:50.433681  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:50.436361  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:50.436393  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:50.436403  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:50.436410  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:50.436417  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:50 GMT
	I1109 22:06:50.436425  777892 round_trippers.go:580]     Audit-Id: 5031184c-31ce-4811-8c1d-2191462bd632
	I1109 22:06:50.436434  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:50.436440  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:50.436644  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:50.933939  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:50.933964  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:50.933974  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:50.933981  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:50.936549  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:50.936620  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:50.936636  777892 round_trippers.go:580]     Audit-Id: d55da684-71fa-41b1-89c9-5033097542b3
	I1109 22:06:50.936644  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:50.936650  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:50.936657  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:50.936663  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:50.936670  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:50 GMT
	I1109 22:06:50.936770  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:51.434033  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:51.434060  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:51.434071  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:51.434078  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:51.436547  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:51.436571  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:51.436580  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:51.436587  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:51.436593  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:51 GMT
	I1109 22:06:51.436618  777892 round_trippers.go:580]     Audit-Id: 6c33ce9c-fc34-4251-921b-f7f726605aa7
	I1109 22:06:51.436629  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:51.436635  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:51.437014  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:51.437491  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:06:51.933404  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:51.933433  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:51.933445  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:51.933452  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:51.936076  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:51.936099  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:51.936108  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:51.936115  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:51 GMT
	I1109 22:06:51.936121  777892 round_trippers.go:580]     Audit-Id: 16b4ec9e-a80b-4f0e-9207-5a7df942b2ec
	I1109 22:06:51.936128  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:51.936138  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:51.936144  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:51.936243  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:52.433349  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:52.433376  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:52.433386  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:52.433393  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:52.436069  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:52.436158  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:52.436174  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:52.436182  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:52 GMT
	I1109 22:06:52.436189  777892 round_trippers.go:580]     Audit-Id: b743d6d8-7650-42f4-9290-62340e725c22
	I1109 22:06:52.436195  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:52.436233  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:52.436246  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:52.436369  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:52.933600  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:52.933641  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:52.933651  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:52.933659  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:52.936320  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:52.936344  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:52.936386  777892 round_trippers.go:580]     Audit-Id: b88504d2-15ef-46a1-9462-d503039fabff
	I1109 22:06:52.936397  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:52.936404  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:52.936416  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:52.936422  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:52.936439  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:52 GMT
	I1109 22:06:52.936542  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:53.434118  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:53.434142  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:53.434151  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:53.434159  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:53.436822  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:53.436880  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:53.436891  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:53.436903  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:53.436910  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:53 GMT
	I1109 22:06:53.436923  777892 round_trippers.go:580]     Audit-Id: 7108f99f-5265-4aec-a16a-10faed65094e
	I1109 22:06:53.436933  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:53.436940  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:53.437096  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:53.933527  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:53.933552  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:53.933563  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:53.933570  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:53.936092  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:53.936118  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:53.936127  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:53 GMT
	I1109 22:06:53.936134  777892 round_trippers.go:580]     Audit-Id: 2ca3bba6-19fa-47b1-86a9-57e3d498ee2d
	I1109 22:06:53.936141  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:53.936148  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:53.936154  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:53.936160  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:53.936260  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:53.936676  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:06:54.433337  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:54.433357  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:54.433367  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:54.433375  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:54.437727  777892 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 22:06:54.437751  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:54.437760  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:54.437766  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:54.437772  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:54.437778  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:54 GMT
	I1109 22:06:54.437784  777892 round_trippers.go:580]     Audit-Id: 9e0400a1-2ee4-4f90-b9c9-0ebcb825290b
	I1109 22:06:54.437791  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:54.438154  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:54.934275  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:54.934300  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:54.934310  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:54.934333  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:54.936755  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:54.936778  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:54.936786  777892 round_trippers.go:580]     Audit-Id: 4b80eaf8-9ea9-46ec-b7e9-08f8cf4caf6b
	I1109 22:06:54.936793  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:54.936799  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:54.936805  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:54.936811  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:54.936818  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:54 GMT
	I1109 22:06:54.936929  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:55.433375  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:55.433403  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:55.433415  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:55.433422  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:55.436038  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:55.436062  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:55.436071  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:55.436077  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:55.436083  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:55 GMT
	I1109 22:06:55.436090  777892 round_trippers.go:580]     Audit-Id: 287da51e-aec3-4dad-a566-b4fd97997c3b
	I1109 22:06:55.436096  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:55.436103  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:55.436273  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:55.934021  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:55.934046  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:55.934057  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:55.934065  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:55.936694  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:55.936720  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:55.936730  777892 round_trippers.go:580]     Audit-Id: f90abeef-4ca6-40a2-97c7-25642470b1e4
	I1109 22:06:55.936736  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:55.936744  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:55.936750  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:55.936756  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:55.936766  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:55 GMT
	I1109 22:06:55.936989  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:55.937414  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:06:56.434165  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:56.434186  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:56.434197  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:56.434204  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:56.436710  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:56.436734  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:56.436742  777892 round_trippers.go:580]     Audit-Id: a173389d-459b-423e-b304-aa129066f489
	I1109 22:06:56.436748  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:56.436754  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:56.436760  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:56.436767  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:56.436773  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:56 GMT
	I1109 22:06:56.437086  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:56.933975  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:56.934000  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:56.934013  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:56.934021  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:56.936551  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:56.936579  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:56.936587  777892 round_trippers.go:580]     Audit-Id: fb4e2f95-20c2-4818-a19c-c92e297af3fa
	I1109 22:06:56.936594  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:56.936600  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:56.936606  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:56.936613  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:56.936619  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:56 GMT
	I1109 22:06:56.936723  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:57.433966  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:57.433993  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:57.434004  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:57.434011  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:57.436547  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:57.436568  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:57.436584  777892 round_trippers.go:580]     Audit-Id: b850a787-3bfb-4c44-bedb-b4d5f2db0c01
	I1109 22:06:57.436591  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:57.436598  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:57.436605  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:57.436611  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:57.436618  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:57 GMT
	I1109 22:06:57.436755  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:57.933379  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:57.933402  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:57.933412  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:57.933421  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:57.935928  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:57.935948  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:57.935956  777892 round_trippers.go:580]     Audit-Id: 4352f0f9-4382-4a59-bc69-64c0575f3e07
	I1109 22:06:57.935963  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:57.935969  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:57.935975  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:57.935981  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:57.935988  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:57 GMT
	I1109 22:06:57.936091  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:58.434299  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:58.434384  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:58.434395  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:58.434403  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:58.437089  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:58.437112  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:58.437121  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:58.437128  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:58 GMT
	I1109 22:06:58.437134  777892 round_trippers.go:580]     Audit-Id: ad0a8b7b-28f7-43d2-aaa1-10d9f8838cad
	I1109 22:06:58.437140  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:58.437148  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:58.437155  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:58.437289  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:58.437689  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:06:58.933435  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:58.933460  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:58.933470  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:58.933478  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:58.936013  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:58.936036  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:58.936045  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:58 GMT
	I1109 22:06:58.936051  777892 round_trippers.go:580]     Audit-Id: 1ce76f6e-4820-4412-beb7-670778f439ae
	I1109 22:06:58.936059  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:58.936066  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:58.936073  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:58.936094  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:58.936218  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:59.433308  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:59.433331  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:59.433340  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:59.433348  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:59.435895  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:59.435922  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:59.435931  777892 round_trippers.go:580]     Audit-Id: 77cc2bdf-81e7-468b-a72a-e315c7fd74b4
	I1109 22:06:59.435937  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:59.435943  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:59.435949  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:59.435956  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:59.435963  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:59 GMT
	I1109 22:06:59.436082  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:06:59.934254  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:06:59.934278  777892 round_trippers.go:469] Request Headers:
	I1109 22:06:59.934293  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:06:59.934301  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:06:59.936706  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:06:59.936727  777892 round_trippers.go:577] Response Headers:
	I1109 22:06:59.936735  777892 round_trippers.go:580]     Audit-Id: c55a3073-1cc2-461c-af81-11841990a713
	I1109 22:06:59.936742  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:06:59.936748  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:06:59.936755  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:06:59.936761  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:06:59.936767  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:06:59 GMT
	I1109 22:06:59.936921  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:00.434065  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:00.434094  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:00.434111  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:00.434119  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:00.436951  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:00.436989  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:00.436999  777892 round_trippers.go:580]     Audit-Id: 7b40bdc1-0478-4355-8953-eb8eda2a7a49
	I1109 22:07:00.437010  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:00.437017  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:00.437023  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:00.437033  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:00.437048  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:00 GMT
	I1109 22:07:00.437461  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:00.437882  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:07:00.934174  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:00.934204  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:00.934217  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:00.934225  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:00.937127  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:00.937154  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:00.937164  777892 round_trippers.go:580]     Audit-Id: 18a326c5-0971-4608-a222-07837bc3b0a2
	I1109 22:07:00.937170  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:00.937178  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:00.937186  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:00.937203  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:00.937210  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:00 GMT
	I1109 22:07:00.937472  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:01.434176  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:01.434207  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:01.434217  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:01.434225  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:01.436822  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:01.436845  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:01.436854  777892 round_trippers.go:580]     Audit-Id: 094a7bd5-a65b-4a12-a9a4-bd2796a12ea8
	I1109 22:07:01.436860  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:01.436867  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:01.436873  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:01.436879  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:01.436890  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:01 GMT
	I1109 22:07:01.437046  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:01.934336  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:01.934358  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:01.934367  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:01.934375  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:01.937069  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:01.937094  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:01.937103  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:01.937109  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:01.937116  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:01.937122  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:01.937129  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:01 GMT
	I1109 22:07:01.937136  777892 round_trippers.go:580]     Audit-Id: 49fd33b7-fa1f-481c-ae11-bec619784305
	I1109 22:07:01.937479  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:02.434133  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:02.434157  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:02.434167  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:02.434175  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:02.436650  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:02.436669  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:02.436677  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:02 GMT
	I1109 22:07:02.436684  777892 round_trippers.go:580]     Audit-Id: e3118bcf-2e03-4517-85bc-ab4fec09fa4d
	I1109 22:07:02.436697  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:02.436704  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:02.436710  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:02.436716  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:02.436874  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:02.933377  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:02.933401  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:02.933411  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:02.933419  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:02.935914  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:02.935936  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:02.935944  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:02 GMT
	I1109 22:07:02.935950  777892 round_trippers.go:580]     Audit-Id: a04d90a3-5088-4567-b570-8bd0e7b519a5
	I1109 22:07:02.935957  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:02.935963  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:02.935971  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:02.935983  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:02.936096  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:02.936490  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:07:03.434257  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:03.434283  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:03.434293  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:03.434301  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:03.436815  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:03.436840  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:03.436849  777892 round_trippers.go:580]     Audit-Id: a7790923-3d1a-47ba-8a3e-2f0ba013fdb7
	I1109 22:07:03.436856  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:03.436862  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:03.436868  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:03.436875  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:03.436884  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:03 GMT
	I1109 22:07:03.437041  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:03.933851  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:03.933875  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:03.933885  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:03.933893  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:03.936435  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:03.936460  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:03.936469  777892 round_trippers.go:580]     Audit-Id: 9f412dfc-9c1a-454a-aa26-829601463254
	I1109 22:07:03.936475  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:03.936481  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:03.936488  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:03.936501  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:03.936508  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:03 GMT
	I1109 22:07:03.936694  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:04.433738  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:04.433764  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:04.433774  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:04.433782  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:04.436334  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:04.436353  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:04.436361  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:04.436368  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:04.436375  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:04.436381  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:04.436387  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:04 GMT
	I1109 22:07:04.436393  777892 round_trippers.go:580]     Audit-Id: 84c41b88-0423-417a-b5be-5c3b9c896d67
	I1109 22:07:04.436553  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:04.933810  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:04.933834  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:04.933844  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:04.933852  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:04.936314  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:04.936334  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:04.936343  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:04.936350  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:04 GMT
	I1109 22:07:04.936356  777892 round_trippers.go:580]     Audit-Id: 0bd78b59-9a2e-4cbf-8aad-9cebec7b20a4
	I1109 22:07:04.936362  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:04.936368  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:04.936374  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:04.936480  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:04.936876  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:07:05.433405  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:05.433427  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:05.433437  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:05.433444  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:05.435946  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:05.435968  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:05.435976  777892 round_trippers.go:580]     Audit-Id: abadccc2-e04e-44cc-a12d-e03cf4c2eb76
	I1109 22:07:05.435983  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:05.435989  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:05.435995  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:05.436001  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:05.436007  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:05 GMT
	I1109 22:07:05.436123  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:05.933902  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:05.933925  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:05.933935  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:05.933942  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:05.936553  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:05.936573  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:05.936589  777892 round_trippers.go:580]     Audit-Id: 14ade84f-5b71-49dd-96b7-f38e8811e07c
	I1109 22:07:05.936597  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:05.936604  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:05.936610  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:05.936616  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:05.936623  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:05 GMT
	I1109 22:07:05.936723  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:06.433398  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:06.433421  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:06.433433  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:06.433440  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:06.436056  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:06.436084  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:06.436094  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:06 GMT
	I1109 22:07:06.436100  777892 round_trippers.go:580]     Audit-Id: aca2f1b0-89a6-46f0-a088-586dc71e99e6
	I1109 22:07:06.436107  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:06.436114  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:06.436120  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:06.436130  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:06.436427  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:06.933323  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:06.933343  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:06.933353  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:06.933361  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:06.935930  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:06.935951  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:06.935968  777892 round_trippers.go:580]     Audit-Id: e21e5fa3-9756-4036-b3d7-0c1853f7633d
	I1109 22:07:06.935975  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:06.935981  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:06.935987  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:06.935997  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:06.936004  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:06 GMT
	I1109 22:07:06.936131  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:07.433355  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:07.433378  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:07.433388  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:07.433395  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:07.436175  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:07.436196  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:07.436205  777892 round_trippers.go:580]     Audit-Id: 9adc58c2-f66d-4773-93de-e3d0c1cd7013
	I1109 22:07:07.436211  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:07.436218  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:07.436225  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:07.436235  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:07.436244  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:07 GMT
	I1109 22:07:07.436687  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:07.437137  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:07:07.933462  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:07.933495  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:07.933506  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:07.933514  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:07.935946  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:07.935967  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:07.935975  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:07.935982  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:07 GMT
	I1109 22:07:07.935988  777892 round_trippers.go:580]     Audit-Id: 967ddec6-c889-40e3-86a1-5c8acb7b8641
	I1109 22:07:07.935994  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:07.936000  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:07.936007  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:07.936214  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:08.434291  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:08.434336  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:08.434347  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:08.434355  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:08.436806  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:08.436829  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:08.436838  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:08.436845  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:08.436851  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:08 GMT
	I1109 22:07:08.436857  777892 round_trippers.go:580]     Audit-Id: 4416e537-5a34-4277-8a9f-8db044010402
	I1109 22:07:08.436867  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:08.436879  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:08.437001  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:08.934076  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:08.934102  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:08.934113  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:08.934120  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:08.936759  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:08.936786  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:08.936795  777892 round_trippers.go:580]     Audit-Id: d16dbeab-03ab-4904-a3cf-9acf9f74b9da
	I1109 22:07:08.936802  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:08.936808  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:08.936814  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:08.936821  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:08.936832  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:08 GMT
	I1109 22:07:08.936956  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:09.434077  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:09.434102  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:09.434112  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:09.434119  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:09.437257  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:07:09.437292  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:09.437307  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:09.437315  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:09.437321  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:09.437329  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:09 GMT
	I1109 22:07:09.437335  777892 round_trippers.go:580]     Audit-Id: 1765ef33-564d-4542-8183-d8c99b89d2a7
	I1109 22:07:09.437343  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:09.437525  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:09.437980  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:07:09.933298  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:09.933335  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:09.933345  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:09.933353  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:09.935827  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:09.935852  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:09.935862  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:09 GMT
	I1109 22:07:09.935870  777892 round_trippers.go:580]     Audit-Id: 76b74288-dcad-4d04-959b-f0106a8e1824
	I1109 22:07:09.935876  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:09.935886  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:09.935899  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:09.935906  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:09.936218  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:10.433967  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:10.433990  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:10.434001  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:10.434008  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:10.436466  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:10.436485  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:10.436494  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:10.436500  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:10.436507  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:10 GMT
	I1109 22:07:10.436513  777892 round_trippers.go:580]     Audit-Id: 3ff27d7a-a28c-44cc-8894-c1ffb0254365
	I1109 22:07:10.436520  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:10.436526  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:10.436728  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:10.934363  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:10.934389  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:10.934399  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:10.934406  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:10.937011  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:10.937073  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:10.937096  777892 round_trippers.go:580]     Audit-Id: 1434e482-7b2a-457e-ab08-a6f7f743ffb3
	I1109 22:07:10.937119  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:10.937151  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:10.937181  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:10.937209  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:10.937237  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:10 GMT
	I1109 22:07:10.937335  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:11.433888  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:11.433914  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:11.433924  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:11.433932  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:11.436655  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:11.436678  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:11.436687  777892 round_trippers.go:580]     Audit-Id: ff99ce85-319d-4a3a-9b53-24a6f4a189d4
	I1109 22:07:11.436694  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:11.436700  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:11.436707  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:11.436713  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:11.436720  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:11 GMT
	I1109 22:07:11.437017  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:11.934006  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:11.934031  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:11.934047  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:11.934054  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:11.936618  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:11.936638  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:11.936646  777892 round_trippers.go:580]     Audit-Id: 42bdcafe-c0f4-4737-99aa-e9e2b6af14d4
	I1109 22:07:11.936652  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:11.936659  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:11.936665  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:11.936671  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:11.936677  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:11 GMT
	I1109 22:07:11.936822  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:11.937242  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:07:12.433325  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:12.433346  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:12.433356  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:12.433364  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:12.435899  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:12.435919  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:12.435927  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:12.435934  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:12.435940  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:12 GMT
	I1109 22:07:12.435946  777892 round_trippers.go:580]     Audit-Id: 94a388dc-0763-498e-8906-fb6bb2060384
	I1109 22:07:12.435952  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:12.435958  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:12.436140  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:12.933274  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:12.933298  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:12.933309  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:12.933317  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:12.935801  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:12.935824  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:12.935833  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:12 GMT
	I1109 22:07:12.935840  777892 round_trippers.go:580]     Audit-Id: 3fb2f47b-66ac-4e00-bc92-1b9789a896d8
	I1109 22:07:12.935846  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:12.935852  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:12.935858  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:12.935867  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:12.935949  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:13.434030  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:13.434054  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:13.434064  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:13.434072  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:13.436562  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:13.436587  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:13.436602  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:13.436611  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:13.436617  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:13 GMT
	I1109 22:07:13.436625  777892 round_trippers.go:580]     Audit-Id: 88b20c01-c8c0-43f9-b123-35fb62af9aa0
	I1109 22:07:13.436631  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:13.436642  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:13.436782  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:13.933914  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:13.933946  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:13.933956  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:13.933963  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:13.936407  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:13.936432  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:13.936441  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:13.936447  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:13.936454  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:13 GMT
	I1109 22:07:13.936460  777892 round_trippers.go:580]     Audit-Id: 57cdadfc-b3f1-4d3f-ada4-5e1fc68ad438
	I1109 22:07:13.936466  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:13.936473  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:13.936757  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:14.433381  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:14.433404  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:14.433414  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:14.433422  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:14.435818  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:14.435839  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:14.435847  777892 round_trippers.go:580]     Audit-Id: cf4a4c12-a008-42a9-b88d-b78e19af6d02
	I1109 22:07:14.435853  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:14.435859  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:14.435866  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:14.435876  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:14.435883  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:14 GMT
	I1109 22:07:14.436102  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:14.436516  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:07:14.934256  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:14.934280  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:14.934291  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:14.934298  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:14.936929  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:14.936951  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:14.936960  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:14 GMT
	I1109 22:07:14.936966  777892 round_trippers.go:580]     Audit-Id: 48524b02-abd3-4e97-b5a2-8bbe3194e4bd
	I1109 22:07:14.936973  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:14.936979  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:14.936986  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:14.936994  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:14.937200  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:15.434335  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:15.434357  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:15.434366  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:15.434374  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:15.437851  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:07:15.437873  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:15.437881  777892 round_trippers.go:580]     Audit-Id: 754168f8-2c69-409a-a862-66c8256be95b
	I1109 22:07:15.437888  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:15.437894  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:15.437900  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:15.437906  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:15.437912  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:15 GMT
	I1109 22:07:15.438031  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:15.934200  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:15.934226  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:15.934236  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:15.934243  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:15.936699  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:15.936724  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:15.936732  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:15.936739  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:15.936745  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:15.936753  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:15 GMT
	I1109 22:07:15.936759  777892 round_trippers.go:580]     Audit-Id: 77befa5b-1b8e-4a1b-af7a-ef9998d0dcf7
	I1109 22:07:15.936770  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:15.936900  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:16.434048  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:16.434073  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:16.434083  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:16.434091  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:16.436805  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:16.436838  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:16.436848  777892 round_trippers.go:580]     Audit-Id: fba7f51a-70ff-4f74-a218-09ed10c29973
	I1109 22:07:16.436855  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:16.436862  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:16.436873  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:16.436890  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:16.436902  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:16 GMT
	I1109 22:07:16.437036  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:16.437487  777892 node_ready.go:58] node "multinode-833232" has status "Ready":"False"
	I1109 22:07:16.933390  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:16.933414  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:16.933424  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:16.933432  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:16.935927  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:16.935947  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:16.935957  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:16.935964  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:16 GMT
	I1109 22:07:16.935970  777892 round_trippers.go:580]     Audit-Id: 4145c236-30dd-4a0d-a28e-9586edd59042
	I1109 22:07:16.935976  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:16.935982  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:16.935988  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:16.936093  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:17.434335  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:17.434360  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:17.434370  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:17.434377  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:17.436865  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:17.436883  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:17.436891  777892 round_trippers.go:580]     Audit-Id: 5f87b0a6-45cc-436f-9159-e14414aff742
	I1109 22:07:17.436898  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:17.436904  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:17.436912  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:17.436918  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:17.436924  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:17 GMT
	I1109 22:07:17.437069  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:17.933370  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:17.933391  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:17.933401  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:17.933409  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:17.935903  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:17.935928  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:17.935939  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:17.935946  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:17.935952  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:17.935959  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:17 GMT
	I1109 22:07:17.935966  777892 round_trippers.go:580]     Audit-Id: 49961031-ecfd-4a4f-964c-960d35486fdc
	I1109 22:07:17.935973  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:17.936082  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:18.434243  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:18.434267  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:18.434279  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:18.434286  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:18.436783  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:18.436809  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:18.436818  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:18.436825  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:18.436831  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:18 GMT
	I1109 22:07:18.436837  777892 round_trippers.go:580]     Audit-Id: 09505736-0d87-4aff-87d3-eda41bb3ee6b
	I1109 22:07:18.436844  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:18.436856  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:18.436980  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"358","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1109 22:07:18.934077  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:18.934100  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:18.934110  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:18.934117  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:18.936431  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:18.936456  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:18.936465  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:18.936471  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:18.936477  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:18.936484  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:18 GMT
	I1109 22:07:18.936490  777892 round_trippers.go:580]     Audit-Id: 3125dcd9-e5bc-4c97-83d0-3123b8b49aa0
	I1109 22:07:18.936500  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:18.936606  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:18.936998  777892 node_ready.go:49] node "multinode-833232" has status "Ready":"True"
	I1109 22:07:18.937017  777892 node_ready.go:38] duration metric: took 31.511914489s waiting for node "multinode-833232" to be "Ready" ...
	I1109 22:07:18.937027  777892 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
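The label selectors enumerated in the pod_ready.go line above are what defines "system-critical" here: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler. The next request in the trace fetches every kube-system pod in one unfiltered List and matches labels client-side. A hedged sketch of that step; listCriticalUnready and isPodReady are hypothetical helper names, and cs is the clientset built in the node sketch earlier.

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // listCriticalUnready fetches every kube-system pod once (as the single
    // unfiltered GET below does) and keeps those carrying one of the critical
    // labels that are not yet Ready.
    func listCriticalUnready(cs kubernetes.Interface) ([]string, error) {
        selectors := [][2]string{
            {"k8s-app", "kube-dns"}, {"component", "etcd"},
            {"component", "kube-apiserver"}, {"component", "kube-controller-manager"},
            {"k8s-app", "kube-proxy"}, {"component", "kube-scheduler"},
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var unready []string
        for i := range pods.Items {
            pod := &pods.Items[i]
            for _, kv := range selectors {
                if pod.Labels[kv[0]] == kv[1] && !isPodReady(pod) {
                    unready = append(unready, pod.Name)
                    break
                }
            }
        }
        return unready, nil
    }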
	I1109 22:07:18.937092  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1109 22:07:18.937103  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:18.937110  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:18.937117  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:18.940515  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:07:18.940536  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:18.940544  777892 round_trippers.go:580]     Audit-Id: 22992226-2b65-42bf-b50e-396bd3fec09d
	I1109 22:07:18.940551  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:18.940557  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:18.940563  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:18.940569  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:18.940576  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:18 GMT
	I1109 22:07:18.940942  777892 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"439","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55533 chars]
	I1109 22:07:18.945034  777892 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kr4mg" in "kube-system" namespace to be "Ready" ...
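Each matched pod is then waited on individually; the coredns-5dd5756b68-kr4mg wait that starts here completes in about a second, per the duration metric further down. A sketch of that single-pod loop, mirroring the node poll earlier and reusing its imports plus isPodReady from the previous sketch (waitForPodReady and the interval are assumptions):

    // waitForPodReady polls one named pod until PodReady=True or timeout.
    func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient errors; keep polling
            }
            return isPodReady(pod), nil
        })
    }

    // e.g. waitForPodReady(cs, "kube-system", "coredns-5dd5756b68-kr4mg", 6*time.Minute)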
	I1109 22:07:18.945152  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kr4mg
	I1109 22:07:18.945182  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:18.945204  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:18.945222  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:18.948004  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:18.948029  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:18.948038  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:18.948045  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:18.948051  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:18 GMT
	I1109 22:07:18.948058  777892 round_trippers.go:580]     Audit-Id: a113ffbf-bc88-44f7-b509-2a3de9fec644
	I1109 22:07:18.948064  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:18.948073  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:18.948190  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"439","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1109 22:07:18.948710  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:18.948725  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:18.948733  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:18.948740  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:18.950976  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:18.950999  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:18.951007  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:18.951014  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:18.951020  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:18.951026  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:18 GMT
	I1109 22:07:18.951032  777892 round_trippers.go:580]     Audit-Id: 6059cfae-f482-44da-9388-f627b658e471
	I1109 22:07:18.951039  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:18.951176  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:18.951611  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kr4mg
	I1109 22:07:18.951625  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:18.951634  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:18.951641  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:18.955081  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:07:18.955112  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:18.955121  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:18.955128  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:18.955134  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:18.955140  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:18.955147  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:18 GMT
	I1109 22:07:18.955157  777892 round_trippers.go:580]     Audit-Id: 7e5230fe-5282-4115-8eb9-cc7a6fbf45a2
	I1109 22:07:18.955278  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"439","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1109 22:07:18.955818  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:18.955833  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:18.955842  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:18.955849  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:18.958636  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:18.958666  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:18.958675  777892 round_trippers.go:580]     Audit-Id: 16fe5364-23a6-4ee9-bc1e-78b64e3f7813
	I1109 22:07:18.958681  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:18.958688  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:18.958697  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:18.958711  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:18.958717  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:18 GMT
	I1109 22:07:18.958852  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:19.459563  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kr4mg
	I1109 22:07:19.459586  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.459596  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.459604  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.462178  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:19.462245  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.462267  777892 round_trippers.go:580]     Audit-Id: 6ba95628-0e81-4ff2-847a-4f6c7c92a0fe
	I1109 22:07:19.462289  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.462338  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.462363  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.462384  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.462403  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.462865  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"439","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1109 22:07:19.463387  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:19.463398  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.463406  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.463412  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.473957  777892 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1109 22:07:19.474014  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.474034  777892 round_trippers.go:580]     Audit-Id: 53072987-c58e-48b9-8733-d51149a464f9
	I1109 22:07:19.474055  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.474096  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.474119  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.474137  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.474157  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.474344  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:19.959531  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kr4mg
	I1109 22:07:19.959570  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.959581  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.959590  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.962089  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:19.962126  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.962137  777892 round_trippers.go:580]     Audit-Id: e3af13c4-34e5-4121-8208-b14179631f7c
	I1109 22:07:19.962144  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.962150  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.962156  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.962163  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.962171  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.962407  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"451","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1109 22:07:19.962936  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:19.962956  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.962966  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.962973  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.965396  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:19.965416  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.965424  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.965469  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.965480  777892 round_trippers.go:580]     Audit-Id: bf18c9e5-5d6b-47da-af22-ed5d6f9c3077
	I1109 22:07:19.965486  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.965492  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.965498  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.965670  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:19.966081  777892 pod_ready.go:92] pod "coredns-5dd5756b68-kr4mg" in "kube-system" namespace has status "Ready":"True"
	I1109 22:07:19.966101  777892 pod_ready.go:81] duration metric: took 1.021037854s waiting for pod "coredns-5dd5756b68-kr4mg" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:19.966113  777892 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:19.966175  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-833232
	I1109 22:07:19.966185  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.966193  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.966202  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.968482  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:19.968502  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.968510  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.968517  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.968523  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.968529  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.968536  777892 round_trippers.go:580]     Audit-Id: c07af7ef-e0ac-4f0f-b14e-1a66c3b878fa
	I1109 22:07:19.968542  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.968718  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-833232","namespace":"kube-system","uid":"1b3a5828-6fa1-43ef-9fe5-0bd827bc607c","resourceVersion":"422","creationTimestamp":"2023-11-09T22:06:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"bcb0d7444037668b0544684a5f617409","kubernetes.io/config.mirror":"bcb0d7444037668b0544684a5f617409","kubernetes.io/config.seen":"2023-11-09T22:06:33.633002538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1109 22:07:19.969178  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:19.969197  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.969206  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.969213  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.971500  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:19.971554  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.971591  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.971621  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.971641  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.971655  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.971662  777892 round_trippers.go:580]     Audit-Id: fcd7c241-59a0-429b-80a5-f46bb5ea353d
	I1109 22:07:19.971668  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.971798  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:19.972204  777892 pod_ready.go:92] pod "etcd-multinode-833232" in "kube-system" namespace has status "Ready":"True"
	I1109 22:07:19.972223  777892 pod_ready.go:81] duration metric: took 6.097514ms waiting for pod "etcd-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:19.972238  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:19.972302  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-833232
	I1109 22:07:19.972312  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.972320  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.972327  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.974631  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:19.974654  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.974665  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.974672  777892 round_trippers.go:580]     Audit-Id: fb054fba-abd9-46f2-87e5-7f19a6108939
	I1109 22:07:19.974678  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.974685  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.974692  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.974702  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.974829  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-833232","namespace":"kube-system","uid":"ac0a37a2-9eb3-4caa-9e04-eb883448846a","resourceVersion":"423","creationTimestamp":"2023-11-09T22:06:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ece6c8a9968fab733b8b5674f1e0f3b3","kubernetes.io/config.mirror":"ece6c8a9968fab733b8b5674f1e0f3b3","kubernetes.io/config.seen":"2023-11-09T22:06:33.632994809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1109 22:07:19.975354  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:19.975375  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.975384  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.975391  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.977562  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:19.977580  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.977587  777892 round_trippers.go:580]     Audit-Id: 1417d4bf-eb59-4ec9-a9e4-fe88093ee2a3
	I1109 22:07:19.977594  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.977600  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.977607  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.977617  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.977623  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.977779  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:19.978189  777892 pod_ready.go:92] pod "kube-apiserver-multinode-833232" in "kube-system" namespace has status "Ready":"True"
	I1109 22:07:19.978208  777892 pod_ready.go:81] duration metric: took 5.960153ms waiting for pod "kube-apiserver-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:19.978226  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:19.978296  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-833232
	I1109 22:07:19.978304  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:19.978330  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:19.978338  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:19.980535  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:19.980555  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:19.980564  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:19.980570  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:19.980576  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:19.980583  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:19 GMT
	I1109 22:07:19.980593  777892 round_trippers.go:580]     Audit-Id: b1eecdae-0a00-4757-b834-849c9343acbd
	I1109 22:07:19.980605  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:19.980817  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-833232","namespace":"kube-system","uid":"c145c0c9-2759-4085-8766-b69466b0ae80","resourceVersion":"424","creationTimestamp":"2023-11-09T22:06:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"85621c7f3e0293e83befbe0eda8a3b19","kubernetes.io/config.mirror":"85621c7f3e0293e83befbe0eda8a3b19","kubernetes.io/config.seen":"2023-11-09T22:06:25.611885873Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1109 22:07:20.134673  777892 request.go:629] Waited for 153.284656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:20.134790  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:20.134804  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:20.134814  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:20.134822  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:20.137322  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:20.137390  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:20.137405  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:20.137412  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:20.137419  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:20.137430  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:20.137438  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:20 GMT
	I1109 22:07:20.137460  777892 round_trippers.go:580]     Audit-Id: 89e5a7fa-19e2-4870-afd4-521864d3bc6a
	I1109 22:07:20.137700  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:20.138134  777892 pod_ready.go:92] pod "kube-controller-manager-multinode-833232" in "kube-system" namespace has status "Ready":"True"
	I1109 22:07:20.138152  777892 pod_ready.go:81] duration metric: took 159.915435ms waiting for pod "kube-controller-manager-multinode-833232" in "kube-system" namespace to be "Ready" ...
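
The "Waited for ...ms due to client-side throttling" entries above come from client-go's default rate limiter rather than server-side priority and fairness (the log says as much): with the default QPS of 5 and burst of 10, requests are spaced roughly 200ms apart, which matches the 150-200ms waits recorded here. A minimal sketch of where those limits live, assuming a stock client-go setup and the default kubeconfig path (illustrative, not minikube's code):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults are QPS=5, Burst=10: one request per ~200ms once the
        // burst is spent, which produces the throttling waits seen in this log.
        cfg.QPS = 50
        cfg.Burst = 100
        _ = kubernetes.NewForConfigOrDie(cfg)
    }
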
	I1109 22:07:20.138164  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgbc8" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:20.334588  777892 request.go:629] Waited for 196.362701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgbc8
	I1109 22:07:20.334690  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgbc8
	I1109 22:07:20.334711  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:20.334741  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:20.334754  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:20.337329  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:20.337363  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:20.337372  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:20.337379  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:20 GMT
	I1109 22:07:20.337385  777892 round_trippers.go:580]     Audit-Id: c292f212-c5b6-4084-945c-b7b5da5b7bbe
	I1109 22:07:20.337391  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:20.337398  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:20.337404  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:20.337641  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgbc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"51c0aad4-80b1-47a7-9a64-07cef5c5b95f","resourceVersion":"418","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b65e1464-d3a2-48a3-b16f-bf49038c0975","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b65e1464-d3a2-48a3-b16f-bf49038c0975\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1109 22:07:20.534452  777892 request.go:629] Waited for 196.315005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:20.534508  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:20.534514  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:20.534523  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:20.534535  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:20.536977  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:20.537060  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:20.537069  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:20.537087  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:20.537098  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:20.537105  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:20 GMT
	I1109 22:07:20.537114  777892 round_trippers.go:580]     Audit-Id: 51c6c73c-7d1f-4c13-ae60-2c5841dc5641
	I1109 22:07:20.537120  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:20.537237  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:20.537676  777892 pod_ready.go:92] pod "kube-proxy-jgbc8" in "kube-system" namespace has status "Ready":"True"
	I1109 22:07:20.537696  777892 pod_ready.go:81] duration metric: took 399.525304ms waiting for pod "kube-proxy-jgbc8" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:20.537707  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:20.735070  777892 request.go:629] Waited for 197.304685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-833232
	I1109 22:07:20.735135  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-833232
	I1109 22:07:20.735144  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:20.735174  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:20.735181  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:20.737593  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:20.737619  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:20.737628  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:20.737635  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:20.737642  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:20 GMT
	I1109 22:07:20.737648  777892 round_trippers.go:580]     Audit-Id: a50e296e-b860-48af-b9e5-194112d40689
	I1109 22:07:20.737657  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:20.737669  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:20.737983  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-833232","namespace":"kube-system","uid":"2c24f114-7915-434c-a183-7dfd0695543e","resourceVersion":"425","creationTimestamp":"2023-11-09T22:06:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9df1ddff0806f6f72d247e55c05e117c","kubernetes.io/config.mirror":"9df1ddff0806f6f72d247e55c05e117c","kubernetes.io/config.seen":"2023-11-09T22:06:33.633001357Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1109 22:07:20.934846  777892 request.go:629] Waited for 196.337774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:20.934930  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:07:20.934944  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:20.934954  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:20.934962  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:20.937423  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:20.937450  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:20.937466  777892 round_trippers.go:580]     Audit-Id: 6c5f6105-9327-4949-8657-8999ad88c352
	I1109 22:07:20.937473  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:20.937480  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:20.937490  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:20.937498  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:20.937510  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:20 GMT
	I1109 22:07:20.937649  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:07:20.938069  777892 pod_ready.go:92] pod "kube-scheduler-multinode-833232" in "kube-system" namespace has status "Ready":"True"
	I1109 22:07:20.938089  777892 pod_ready.go:81] duration metric: took 400.374447ms waiting for pod "kube-scheduler-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:07:20.938103  777892 pod_ready.go:38] duration metric: took 2.001058836s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
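
The whole pod_ready sequence above reduces to polling each pod's Ready condition until it reports "True" or the per-pod 6m0s budget runs out. A minimal client-go sketch of that pattern (not minikube's actual implementation; pod name and namespace are taken from the log, the kubeconfig path is assumed):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Poll until the pod's Ready condition is True, within the log's 6m0s budget.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-833232", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat lookup errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("wait finished, err =", err)
    }
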
	I1109 22:07:20.938122  777892 api_server.go:52] waiting for apiserver process to appear ...
	I1109 22:07:20.938184  777892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 22:07:20.949903  777892 command_runner.go:130] > 1279
	I1109 22:07:20.951264  777892 api_server.go:72] duration metric: took 33.718386455s to wait for apiserver process to appear ...
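
In the pgrep invocation above, -f matches against each process's full command line, -x requires the pattern to match that whole line exactly, and -n selects only the newest matching process; the "1279" echoed by command_runner just above is therefore the apiserver's PID.
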
	I1109 22:07:20.951315  777892 api_server.go:88] waiting for apiserver healthz status ...
	I1109 22:07:20.951338  777892 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1109 22:07:20.961363  777892 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1109 22:07:20.961444  777892 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1109 22:07:20.961460  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:20.961470  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:20.961478  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:20.962781  777892 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 22:07:20.962802  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:20.962811  777892 round_trippers.go:580]     Audit-Id: 6f7c95b2-1379-4343-b7c9-8abf70ad0425
	I1109 22:07:20.962818  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:20.962824  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:20.962830  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:20.962837  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:20.962846  777892 round_trippers.go:580]     Content-Length: 264
	I1109 22:07:20.962852  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:20 GMT
	I1109 22:07:20.962872  777892 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1109 22:07:20.962964  777892 api_server.go:141] control plane version: v1.28.3
	I1109 22:07:20.962984  777892 api_server.go:131] duration metric: took 11.656801ms to wait for apiserver health ...
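
Both probes above (GET /healthz returning "ok", then GET /version returning the JSON body) can be reproduced with client-go's discovery client; a sketch under the same kubeconfig assumption as before:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // GET /healthz -> "ok", as in the log.
        raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(raw))
        // GET /version -> the JSON body shown above; GitVersion was "v1.28.3" in this run.
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println(info.GitVersion)
    }
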
	I1109 22:07:20.962991  777892 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 22:07:21.134398  777892 request.go:629] Waited for 171.32537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1109 22:07:21.134464  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1109 22:07:21.134470  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:21.134479  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:21.134491  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:21.138104  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:07:21.138182  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:21.138202  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:21.138210  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:21.138218  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:21 GMT
	I1109 22:07:21.138225  777892 round_trippers.go:580]     Audit-Id: 3ac3689a-515c-4e37-b714-7bdac6983c45
	I1109 22:07:21.138231  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:21.138269  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:21.138625  777892 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"451","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1109 22:07:21.141084  777892 system_pods.go:59] 8 kube-system pods found
	I1109 22:07:21.141120  777892 system_pods.go:61] "coredns-5dd5756b68-kr4mg" [888d0cf3-ae53-45a9-bfc5-dae176b2f1b4] Running
	I1109 22:07:21.141129  777892 system_pods.go:61] "etcd-multinode-833232" [1b3a5828-6fa1-43ef-9fe5-0bd827bc607c] Running
	I1109 22:07:21.141133  777892 system_pods.go:61] "kindnet-vdwtv" [b34c0ee0-70b5-485d-8116-5a79eb0c520f] Running
	I1109 22:07:21.141140  777892 system_pods.go:61] "kube-apiserver-multinode-833232" [ac0a37a2-9eb3-4caa-9e04-eb883448846a] Running
	I1109 22:07:21.141146  777892 system_pods.go:61] "kube-controller-manager-multinode-833232" [c145c0c9-2759-4085-8766-b69466b0ae80] Running
	I1109 22:07:21.141153  777892 system_pods.go:61] "kube-proxy-jgbc8" [51c0aad4-80b1-47a7-9a64-07cef5c5b95f] Running
	I1109 22:07:21.141161  777892 system_pods.go:61] "kube-scheduler-multinode-833232" [2c24f114-7915-434c-a183-7dfd0695543e] Running
	I1109 22:07:21.141171  777892 system_pods.go:61] "storage-provisioner" [5135cf21-5a1c-4fd7-a69e-887e1bccbe91] Running
	I1109 22:07:21.141178  777892 system_pods.go:74] duration metric: took 178.180065ms to wait for pod list to return data ...
	I1109 22:07:21.141191  777892 default_sa.go:34] waiting for default service account to be created ...
	I1109 22:07:21.334648  777892 request.go:629] Waited for 193.385586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1109 22:07:21.334729  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1109 22:07:21.334736  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:21.334745  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:21.334752  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:21.337349  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:21.337415  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:21.337430  777892 round_trippers.go:580]     Audit-Id: fda1f712-7c83-455b-b8b0-9dc595230954
	I1109 22:07:21.337450  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:21.337457  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:21.337467  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:21.337474  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:21.337483  777892 round_trippers.go:580]     Content-Length: 261
	I1109 22:07:21.337489  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:21 GMT
	I1109 22:07:21.337529  777892 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"59582f0b-a5ac-4851-bd9b-7cd834902506","resourceVersion":"353","creationTimestamp":"2023-11-09T22:06:46Z"}}]}
	I1109 22:07:21.337740  777892 default_sa.go:45] found service account: "default"
	I1109 22:07:21.337759  777892 default_sa.go:55] duration metric: took 196.561281ms for default service account to be created ...
	I1109 22:07:21.337768  777892 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 22:07:21.534115  777892 request.go:629] Waited for 196.280199ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1109 22:07:21.534192  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1109 22:07:21.534199  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:21.534214  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:21.534223  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:21.537691  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:07:21.537714  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:21.537722  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:21 GMT
	I1109 22:07:21.537728  777892 round_trippers.go:580]     Audit-Id: 99cedc86-01d7-40cd-ba49-3d757ce223d5
	I1109 22:07:21.537735  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:21.537745  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:21.537753  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:21.537759  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:21.538411  777892 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"451","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1109 22:07:21.540875  777892 system_pods.go:86] 8 kube-system pods found
	I1109 22:07:21.540907  777892 system_pods.go:89] "coredns-5dd5756b68-kr4mg" [888d0cf3-ae53-45a9-bfc5-dae176b2f1b4] Running
	I1109 22:07:21.540915  777892 system_pods.go:89] "etcd-multinode-833232" [1b3a5828-6fa1-43ef-9fe5-0bd827bc607c] Running
	I1109 22:07:21.540927  777892 system_pods.go:89] "kindnet-vdwtv" [b34c0ee0-70b5-485d-8116-5a79eb0c520f] Running
	I1109 22:07:21.540932  777892 system_pods.go:89] "kube-apiserver-multinode-833232" [ac0a37a2-9eb3-4caa-9e04-eb883448846a] Running
	I1109 22:07:21.540942  777892 system_pods.go:89] "kube-controller-manager-multinode-833232" [c145c0c9-2759-4085-8766-b69466b0ae80] Running
	I1109 22:07:21.540949  777892 system_pods.go:89] "kube-proxy-jgbc8" [51c0aad4-80b1-47a7-9a64-07cef5c5b95f] Running
	I1109 22:07:21.540955  777892 system_pods.go:89] "kube-scheduler-multinode-833232" [2c24f114-7915-434c-a183-7dfd0695543e] Running
	I1109 22:07:21.540959  777892 system_pods.go:89] "storage-provisioner" [5135cf21-5a1c-4fd7-a69e-887e1bccbe91] Running
	I1109 22:07:21.540966  777892 system_pods.go:126] duration metric: took 203.193659ms to wait for k8s-apps to be running ...
	I1109 22:07:21.540976  777892 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 22:07:21.541041  777892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 22:07:21.554962  777892 system_svc.go:56] duration metric: took 13.975654ms WaitForService to wait for kubelet.
	I1109 22:07:21.555027  777892 kubeadm.go:581] duration metric: took 34.322155184s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 22:07:21.555052  777892 node_conditions.go:102] verifying NodePressure condition ...
	I1109 22:07:21.734422  777892 request.go:629] Waited for 179.301929ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1109 22:07:21.734495  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1109 22:07:21.734506  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:21.734515  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:21.734523  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:21.737179  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:21.737229  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:21.737255  777892 round_trippers.go:580]     Audit-Id: 35d5b99b-a8ed-4030-9713-a7181423ee15
	I1109 22:07:21.737268  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:21.737280  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:21.737288  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:21.737295  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:21.737301  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:21 GMT
	I1109 22:07:21.737403  777892 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"457"},"items":[{"metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1109 22:07:21.737850  777892 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 22:07:21.737875  777892 node_conditions.go:123] node cpu capacity is 2
	I1109 22:07:21.737886  777892 node_conditions.go:105] duration metric: took 182.828644ms to run NodePressure ...
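
The NodePressure verification reads the two capacities straight off the Node objects returned above. A sketch of pulling the same numbers (kubeconfig path assumed, as before):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            // This run printed 203034800Ki ephemeral storage and 2 CPUs.
            fmt.Println(n.Name, eph.String(), cpu.String())
        }
    }
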
	I1109 22:07:21.737897  777892 start.go:228] waiting for startup goroutines ...
	I1109 22:07:21.737906  777892 start.go:233] waiting for cluster config update ...
	I1109 22:07:21.737916  777892 start.go:242] writing updated cluster config ...
	I1109 22:07:21.741148  777892 out.go:177] 
	I1109 22:07:21.743137  777892 config.go:182] Loaded profile config "multinode-833232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 22:07:21.743256  777892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/config.json ...
	I1109 22:07:21.745587  777892 out.go:177] * Starting worker node multinode-833232-m02 in cluster multinode-833232
	I1109 22:07:21.747490  777892 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 22:07:21.749576  777892 out.go:177] * Pulling base image ...
	I1109 22:07:21.752304  777892 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 22:07:21.752337  777892 cache.go:56] Caching tarball of preloaded images
	I1109 22:07:21.752384  777892 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1109 22:07:21.752449  777892 preload.go:174] Found /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 22:07:21.752460  777892 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1109 22:07:21.752563  777892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/config.json ...
	I1109 22:07:21.769288  777892 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1109 22:07:21.769311  777892 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1109 22:07:21.769336  777892 cache.go:194] Successfully downloaded all kic artifacts
	I1109 22:07:21.769367  777892 start.go:365] acquiring machines lock for multinode-833232-m02: {Name:mk05027ff46c0aa8c1f88ff8065e00b3874137d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:07:21.769487  777892 start.go:369] acquired machines lock for "multinode-833232-m02" in 102.744µs
	I1109 22:07:21.769513  777892 start.go:93] Provisioning new machine with config: &{Name:multinode-833232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-833232 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1109 22:07:21.769585  777892 start.go:125] createHost starting for "m02" (driver="docker")
	I1109 22:07:21.772407  777892 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1109 22:07:21.772529  777892 start.go:159] libmachine.API.Create for "multinode-833232" (driver="docker")
	I1109 22:07:21.772552  777892 client.go:168] LocalClient.Create starting
	I1109 22:07:21.772613  777892 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem
	I1109 22:07:21.772648  777892 main.go:141] libmachine: Decoding PEM data...
	I1109 22:07:21.772669  777892 main.go:141] libmachine: Parsing certificate...
	I1109 22:07:21.772724  777892 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem
	I1109 22:07:21.772745  777892 main.go:141] libmachine: Decoding PEM data...
	I1109 22:07:21.772756  777892 main.go:141] libmachine: Parsing certificate...
	I1109 22:07:21.773071  777892 cli_runner.go:164] Run: docker network inspect multinode-833232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 22:07:21.791529  777892 network_create.go:77] Found existing network {name:multinode-833232 subnet:0x4002d50ea0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1109 22:07:21.791570  777892 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-833232-m02" container
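
The static IP follows from the existing network inspected above: on the 192.168.58.0/24 subnet the gateway holds .1 and the primary node .2, so the new m02 container is apparently assigned the next free address, 192.168.58.3.
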
	I1109 22:07:21.791646  777892 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 22:07:21.813933  777892 cli_runner.go:164] Run: docker volume create multinode-833232-m02 --label name.minikube.sigs.k8s.io=multinode-833232-m02 --label created_by.minikube.sigs.k8s.io=true
	I1109 22:07:21.832288  777892 oci.go:103] Successfully created a docker volume multinode-833232-m02
	I1109 22:07:21.832374  777892 cli_runner.go:164] Run: docker run --rm --name multinode-833232-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-833232-m02 --entrypoint /usr/bin/test -v multinode-833232-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1109 22:07:22.413161  777892 oci.go:107] Successfully prepared a docker volume multinode-833232-m02
	I1109 22:07:22.413198  777892 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 22:07:22.413219  777892 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 22:07:22.413299  777892 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-833232-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 22:07:26.767010  777892 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-833232-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.353666624s)
	I1109 22:07:26.767040  777892 kic.go:203] duration metric: took 4.353819 seconds to extract preloaded images to volume
	W1109 22:07:26.767183  777892 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 22:07:26.767287  777892 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 22:07:26.839827  777892 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-833232-m02 --name multinode-833232-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-833232-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-833232-m02 --network multinode-833232 --ip 192.168.58.3 --volume multinode-833232-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1109 22:07:27.198785  777892 cli_runner.go:164] Run: docker container inspect multinode-833232-m02 --format={{.State.Running}}
	I1109 22:07:27.223532  777892 cli_runner.go:164] Run: docker container inspect multinode-833232-m02 --format={{.State.Status}}
	I1109 22:07:27.252812  777892 cli_runner.go:164] Run: docker exec multinode-833232-m02 stat /var/lib/dpkg/alternatives/iptables
	I1109 22:07:27.334584  777892 oci.go:144] the created container "multinode-833232-m02" has a running status.
	I1109 22:07:27.334610  777892 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa...
	I1109 22:07:28.638997  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1109 22:07:28.639050  777892 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 22:07:28.660580  777892 cli_runner.go:164] Run: docker container inspect multinode-833232-m02 --format={{.State.Status}}
	I1109 22:07:28.679076  777892 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 22:07:28.679097  777892 kic_runner.go:114] Args: [docker exec --privileged multinode-833232-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 22:07:28.744447  777892 cli_runner.go:164] Run: docker container inspect multinode-833232-m02 --format={{.State.Status}}
	I1109 22:07:28.763579  777892 machine.go:88] provisioning docker machine ...
	I1109 22:07:28.763618  777892 ubuntu.go:169] provisioning hostname "multinode-833232-m02"
	I1109 22:07:28.763687  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:07:28.781158  777892 main.go:141] libmachine: Using SSH client type: native
	I1109 22:07:28.781589  777892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33755 <nil> <nil>}
	I1109 22:07:28.781606  777892 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-833232-m02 && echo "multinode-833232-m02" | sudo tee /etc/hostname
	I1109 22:07:28.937553  777892 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-833232-m02
	
	I1109 22:07:28.937630  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:07:28.956234  777892 main.go:141] libmachine: Using SSH client type: native
	I1109 22:07:28.956722  777892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33755 <nil> <nil>}
	I1109 22:07:28.956746  777892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-833232-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-833232-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-833232-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 22:07:29.099457  777892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 22:07:29.099482  777892 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 22:07:29.099497  777892 ubuntu.go:177] setting up certificates
	I1109 22:07:29.099506  777892 provision.go:83] configureAuth start
	I1109 22:07:29.099571  777892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-833232-m02
	I1109 22:07:29.118220  777892 provision.go:138] copyHostCerts
	I1109 22:07:29.118257  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 22:07:29.118292  777892 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 22:07:29.118299  777892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 22:07:29.118544  777892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 22:07:29.118649  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 22:07:29.118674  777892 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 22:07:29.118680  777892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 22:07:29.118716  777892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 22:07:29.118795  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 22:07:29.118829  777892 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 22:07:29.118835  777892 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 22:07:29.118861  777892 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 22:07:29.118910  777892 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.multinode-833232-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-833232-m02]
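
minikube generates that server certificate in-process. A self-contained Go sketch of signing a SAN-bearing server cert (illustrative only: it uses a throwaway CA instead of loading the ca.pem/ca-key.pem files named in the log; the SANs, org, and 26280h expiry are copied from the log and config dump above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; the real flow loads ca.pem / ca-key.pem from disk.
        // Error handling elided for brevity.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-833232-m02"}},
            // SANs from the provision log line above.
            IPAddresses: []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "multinode-833232-m02"},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour),
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
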
	I1109 22:07:29.593714  777892 provision.go:172] copyRemoteCerts
	I1109 22:07:29.593788  777892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 22:07:29.593835  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:07:29.612107  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33755 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa Username:docker}
	I1109 22:07:29.712703  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 22:07:29.712760  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 22:07:29.740175  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 22:07:29.740232  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1109 22:07:29.766784  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 22:07:29.766841  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 22:07:29.795924  777892 provision.go:86] duration metric: configureAuth took 696.406046ms
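The configureAuth step above generates a server certificate signed by minikube's local CA, with the org and SAN list shown on the provision.go:112 line, then scps ca.pem, server.pem, and server-key.pem into /etc/docker on the node. For illustration only, a rough openssl equivalent of that generation (minikube does this in Go; the paths, org, and SANs are taken from the log, the openssl invocation itself is an assumption):

    # Sketch: reproduce the logged server cert with openssl (bash, for <()).
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.multinode-833232-m02"
    openssl x509 -req -in server.csr \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-833232-m02')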
	I1109 22:07:29.795948  777892 ubuntu.go:193] setting minikube options for container-runtime
	I1109 22:07:29.796136  777892 config.go:182] Loaded profile config "multinode-833232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 22:07:29.796237  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:07:29.813729  777892 main.go:141] libmachine: Using SSH client type: native
	I1109 22:07:29.814198  777892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33755 <nil> <nil>}
	I1109 22:07:29.814219  777892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 22:07:30.088324  777892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 22:07:30.088395  777892 machine.go:91] provisioned docker machine in 1.324785332s
	I1109 22:07:30.088420  777892 client.go:171] LocalClient.Create took 8.315860615s
	I1109 22:07:30.088454  777892 start.go:167] duration metric: libmachine.API.Create for "multinode-833232" took 8.315925361s
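Earlier in this step the provisioner wrote a one-line environment file over SSH and restarted CRI-O; the echoed output above confirms what was written. A quick manual check on the node (assuming SSH access; the file name is taken from the log):

    # Verify the drop-in the provisioner just wrote, then the service state.
    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio    # "active" once the restart completes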
	I1109 22:07:30.088464  777892 start.go:300] post-start starting for "multinode-833232-m02" (driver="docker")
	I1109 22:07:30.088474  777892 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 22:07:30.088565  777892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 22:07:30.088619  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:07:30.112413  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33755 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa Username:docker}
	I1109 22:07:30.219072  777892 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 22:07:30.223427  777892 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1109 22:07:30.223447  777892 command_runner.go:130] > NAME="Ubuntu"
	I1109 22:07:30.223454  777892 command_runner.go:130] > VERSION_ID="22.04"
	I1109 22:07:30.223460  777892 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1109 22:07:30.223466  777892 command_runner.go:130] > VERSION_CODENAME=jammy
	I1109 22:07:30.223470  777892 command_runner.go:130] > ID=ubuntu
	I1109 22:07:30.223475  777892 command_runner.go:130] > ID_LIKE=debian
	I1109 22:07:30.223483  777892 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1109 22:07:30.223489  777892 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1109 22:07:30.223506  777892 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1109 22:07:30.223514  777892 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1109 22:07:30.223520  777892 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1109 22:07:30.223573  777892 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 22:07:30.223597  777892 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 22:07:30.223607  777892 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 22:07:30.223614  777892 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1109 22:07:30.223625  777892 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 22:07:30.223687  777892 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 22:07:30.223761  777892 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 22:07:30.223768  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> /etc/ssl/certs/7135732.pem
	I1109 22:07:30.223867  777892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 22:07:30.235247  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 22:07:30.264021  777892 start.go:303] post-start completed in 175.541827ms
	I1109 22:07:30.264405  777892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-833232-m02
	I1109 22:07:30.285617  777892 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/config.json ...
	I1109 22:07:30.285899  777892 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 22:07:30.285948  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:07:30.305029  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33755 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa Username:docker}
	I1109 22:07:30.404218  777892 command_runner.go:130] > 11%
	I1109 22:07:30.404819  777892 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 22:07:30.410392  777892 command_runner.go:130] > 173G
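The two probes above read the used percentage and free space for /var; NR==2 in the awk programs skips df's header row. Combined for reference:

    # Disk probes as logged above; NR==2 selects the data row under df's header.
    df -h  /var | awk 'NR==2{print $5}'    # used, e.g. 11%
    df -BG /var | awk 'NR==2{print $4}'    # free, e.g. 173G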
	I1109 22:07:30.410952  777892 start.go:128] duration metric: createHost completed in 8.641355401s
	I1109 22:07:30.410970  777892 start.go:83] releasing machines lock for "multinode-833232-m02", held for 8.64147371s
	I1109 22:07:30.411062  777892 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-833232-m02
	I1109 22:07:30.438345  777892 out.go:177] * Found network options:
	I1109 22:07:30.440425  777892 out.go:177]   - NO_PROXY=192.168.58.2
	W1109 22:07:30.442339  777892 proxy.go:119] fail to check proxy env: Error ip not in block
	W1109 22:07:30.442378  777892 proxy.go:119] fail to check proxy env: Error ip not in block
	I1109 22:07:30.442448  777892 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 22:07:30.442497  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:07:30.442762  777892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 22:07:30.442821  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:07:30.463295  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33755 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa Username:docker}
	I1109 22:07:30.464371  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33755 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa Username:docker}
	I1109 22:07:30.721322  777892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 22:07:30.721412  777892 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1109 22:07:30.726559  777892 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1109 22:07:30.726583  777892 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1109 22:07:30.726594  777892 command_runner.go:130] > Device: b3h/179d	Inode: 1823289     Links: 1
	I1109 22:07:30.726601  777892 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1109 22:07:30.726608  777892 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1109 22:07:30.726614  777892 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1109 22:07:30.726620  777892 command_runner.go:130] > Change: 2023-11-09 21:28:21.090111595 +0000
	I1109 22:07:30.726626  777892 command_runner.go:130] >  Birth: 2023-11-09 21:28:21.090111595 +0000
	I1109 22:07:30.726962  777892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:07:30.751728  777892 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 22:07:30.751845  777892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:07:30.795236  777892 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1109 22:07:30.795337  777892 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
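The two find/-exec passes above rename the loopback and bridge/podman CNI configs to *.mk_disabled so that only the CNI minikube manages is picked up by CRI-O. A standalone, fully quoted version of the same pattern (the logged commands leave the -name globs unquoted and rely on the remote shell):

    # Disable competing CNI configs by renaming them; quoting the patterns
    # keeps the local shell from expanding them before find sees them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' \
      -not -name '*.mk_disabled' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;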
	I1109 22:07:30.795361  777892 start.go:472] detecting cgroup driver to use...
	I1109 22:07:30.795418  777892 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1109 22:07:30.795497  777892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 22:07:30.813967  777892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 22:07:30.827463  777892 docker.go:203] disabling cri-docker service (if available) ...
	I1109 22:07:30.827533  777892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 22:07:30.843669  777892 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 22:07:30.860079  777892 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 22:07:30.954537  777892 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 22:07:31.068448  777892 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1109 22:07:31.068474  777892 docker.go:219] disabling docker service ...
	I1109 22:07:31.068528  777892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 22:07:31.091334  777892 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 22:07:31.113451  777892 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 22:07:31.214555  777892 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1109 22:07:31.214625  777892 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 22:07:31.330516  777892 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1109 22:07:31.330838  777892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
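The block above stops, disables, and masks the cri-docker and docker units so that CRI-O is the only runtime answering the CRI socket; masking symlinks the unit to /dev/null, as the "Created symlink" lines show. Condensed (assuming systemd, as on this Ubuntu node):

    # Stop and mask Docker so socket activation cannot restart it.
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo 'docker is inactive'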
	I1109 22:07:31.348426  777892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 22:07:31.367663  777892 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
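Writing /etc/crictl.yaml above pins crictl to CRI-O's socket so that later crictl invocations need no endpoint flag. The equivalent per-invocation form (assuming crictl is on the node, which the log confirms below):

    # Equivalent to the config file written above, passed as a flag instead.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version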
	I1109 22:07:31.369034  777892 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1109 22:07:31.369124  777892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:07:31.382732  777892 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 22:07:31.382825  777892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:07:31.395331  777892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:07:31.407089  777892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
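The sed edits above retarget CRI-O's drop-in config: the pause image, the cgroup driver, and conmon's cgroup (deleted, then re-added immediately after cgroup_manager). A grep to confirm the net effect on the drop-in file:

    # Confirm the three keys the sed edits set; expected values follow.
    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.9"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"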
	I1109 22:07:31.418923  777892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 22:07:31.431164  777892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 22:07:31.440923  777892 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1109 22:07:31.442278  777892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 22:07:31.452923  777892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 22:07:31.548149  777892 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 22:07:31.657262  777892 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 22:07:31.657357  777892 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 22:07:31.663117  777892 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1109 22:07:31.663185  777892 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1109 22:07:31.663207  777892 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1109 22:07:31.663232  777892 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1109 22:07:31.663253  777892 command_runner.go:130] > Access: 2023-11-09 22:07:31.639522029 +0000
	I1109 22:07:31.663280  777892 command_runner.go:130] > Modify: 2023-11-09 22:07:31.639522029 +0000
	I1109 22:07:31.663301  777892 command_runner.go:130] > Change: 2023-11-09 22:07:31.639522029 +0000
	I1109 22:07:31.663320  777892 command_runner.go:130] >  Birth: -
	I1109 22:07:31.663360  777892 start.go:540] Will wait 60s for crictl version
	I1109 22:07:31.663442  777892 ssh_runner.go:195] Run: which crictl
	I1109 22:07:31.667461  777892 command_runner.go:130] > /usr/bin/crictl
	I1109 22:07:31.667903  777892 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1109 22:07:31.709338  777892 command_runner.go:130] > Version:  0.1.0
	I1109 22:07:31.709731  777892 command_runner.go:130] > RuntimeName:  cri-o
	I1109 22:07:31.709957  777892 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1109 22:07:31.710165  777892 command_runner.go:130] > RuntimeApiVersion:  v1
	I1109 22:07:31.713059  777892 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
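Both 60-second waits above are simple polls: first for the socket file to exist, then for crictl to answer. A minimal shell equivalent of that readiness loop (illustrative; the real poll lives in minikube's start.go):

    # Poll up to 60s for the CRI-O socket, then ask the runtime its version.
    for i in $(seq 1 60); do
      [ -S /var/run/crio/crio.sock ] && break
      sleep 1
    done
    sudo /usr/bin/crictl version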
	I1109 22:07:31.713196  777892 ssh_runner.go:195] Run: crio --version
	I1109 22:07:31.756331  777892 command_runner.go:130] > crio version 1.24.6
	I1109 22:07:31.756409  777892 command_runner.go:130] > Version:          1.24.6
	I1109 22:07:31.756446  777892 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1109 22:07:31.756466  777892 command_runner.go:130] > GitTreeState:     clean
	I1109 22:07:31.756532  777892 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1109 22:07:31.756559  777892 command_runner.go:130] > GoVersion:        go1.18.2
	I1109 22:07:31.756588  777892 command_runner.go:130] > Compiler:         gc
	I1109 22:07:31.756623  777892 command_runner.go:130] > Platform:         linux/arm64
	I1109 22:07:31.756656  777892 command_runner.go:130] > Linkmode:         dynamic
	I1109 22:07:31.756688  777892 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1109 22:07:31.756719  777892 command_runner.go:130] > SeccompEnabled:   true
	I1109 22:07:31.756739  777892 command_runner.go:130] > AppArmorEnabled:  false
	I1109 22:07:31.759207  777892 ssh_runner.go:195] Run: crio --version
	I1109 22:07:31.801737  777892 command_runner.go:130] > crio version 1.24.6
	I1109 22:07:31.801759  777892 command_runner.go:130] > Version:          1.24.6
	I1109 22:07:31.801768  777892 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1109 22:07:31.801773  777892 command_runner.go:130] > GitTreeState:     clean
	I1109 22:07:31.801780  777892 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1109 22:07:31.801788  777892 command_runner.go:130] > GoVersion:        go1.18.2
	I1109 22:07:31.801794  777892 command_runner.go:130] > Compiler:         gc
	I1109 22:07:31.801800  777892 command_runner.go:130] > Platform:         linux/arm64
	I1109 22:07:31.801806  777892 command_runner.go:130] > Linkmode:         dynamic
	I1109 22:07:31.801820  777892 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1109 22:07:31.801825  777892 command_runner.go:130] > SeccompEnabled:   true
	I1109 22:07:31.801833  777892 command_runner.go:130] > AppArmorEnabled:  false
	I1109 22:07:31.805863  777892 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1109 22:07:31.807830  777892 out.go:177]   - env NO_PROXY=192.168.58.2
	I1109 22:07:31.809998  777892 cli_runner.go:164] Run: docker network inspect multinode-833232 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 22:07:31.829454  777892 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1109 22:07:31.834398  777892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
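The /etc/hosts rewrite above filters out any existing host.minikube.internal entry, appends the fresh one, and copies the temp file back with cp rather than mv; plausibly cp is used to preserve the file's inode, since Docker bind-mounts /etc/hosts into the container (an assumption, the log does not say why). Spelled out:

    # Same update as logged above; cp keeps the original /etc/hosts inode,
    # which matters for files bind-mounted into the container.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo '192.168.58.1	host.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$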
	I1109 22:07:31.847426  777892 certs.go:56] Setting up /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232 for IP: 192.168.58.3
	I1109 22:07:31.847458  777892 certs.go:190] acquiring lock for shared ca certs: {Name:mk44b1a46a3acda84ddb5040e7a20ebcace98935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 22:07:31.847585  777892 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key
	I1109 22:07:31.847621  777892 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key
	I1109 22:07:31.847631  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 22:07:31.847646  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 22:07:31.847657  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 22:07:31.847668  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 22:07:31.847722  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem (1338 bytes)
	W1109 22:07:31.847755  777892 certs.go:433] ignoring /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573_empty.pem, impossibly tiny 0 bytes
	I1109 22:07:31.847765  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 22:07:31.847790  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem (1078 bytes)
	I1109 22:07:31.847814  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem (1123 bytes)
	I1109 22:07:31.847837  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem (1679 bytes)
	I1109 22:07:31.847879  777892 certs.go:437] found cert: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 22:07:31.847906  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> /usr/share/ca-certificates/7135732.pem
	I1109 22:07:31.847919  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:07:31.847931  777892 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem -> /usr/share/ca-certificates/713573.pem
	I1109 22:07:31.848266  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 22:07:31.875598  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 22:07:31.902797  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 22:07:31.930235  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 22:07:31.959081  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /usr/share/ca-certificates/7135732.pem (1708 bytes)
	I1109 22:07:31.987762  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 22:07:32.023338  777892 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/713573.pem --> /usr/share/ca-certificates/713573.pem (1338 bytes)
	I1109 22:07:32.051751  777892 ssh_runner.go:195] Run: openssl version
	I1109 22:07:32.058631  777892 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1109 22:07:32.058954  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7135732.pem && ln -fs /usr/share/ca-certificates/7135732.pem /etc/ssl/certs/7135732.pem"
	I1109 22:07:32.070726  777892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7135732.pem
	I1109 22:07:32.075274  777892 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  9 21:41 /usr/share/ca-certificates/7135732.pem
	I1109 22:07:32.075308  777892 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov  9 21:41 /usr/share/ca-certificates/7135732.pem
	I1109 22:07:32.075371  777892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7135732.pem
	I1109 22:07:32.083305  777892 command_runner.go:130] > 3ec20f2e
	I1109 22:07:32.083727  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7135732.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 22:07:32.094997  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 22:07:32.106941  777892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:07:32.111391  777892 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  9 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:07:32.111609  777892 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov  9 21:28 /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:07:32.111668  777892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 22:07:32.119914  777892 command_runner.go:130] > b5213941
	I1109 22:07:32.120053  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 22:07:32.132530  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/713573.pem && ln -fs /usr/share/ca-certificates/713573.pem /etc/ssl/certs/713573.pem"
	I1109 22:07:32.144150  777892 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/713573.pem
	I1109 22:07:32.148581  777892 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  9 21:41 /usr/share/ca-certificates/713573.pem
	I1109 22:07:32.148988  777892 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov  9 21:41 /usr/share/ca-certificates/713573.pem
	I1109 22:07:32.149049  777892 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/713573.pem
	I1109 22:07:32.157001  777892 command_runner.go:130] > 51391683
	I1109 22:07:32.157417  777892 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/713573.pem /etc/ssl/certs/51391683.0"
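The hash/symlink pairs above implement OpenSSL's hashed-directory convention: libssl locates a CA in /etc/ssl/certs via a <subject-hash>.0 symlink, where the hash comes from openssl x509 -hash. The generic form of what the three blocks do:

    # OpenSSL hashed-dir convention: link each CA cert as <subject-hash>.0.
    for pem in /usr/share/ca-certificates/*.pem; do
      h=$(openssl x509 -hash -noout -in "$pem")
      sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"
    done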
	I1109 22:07:32.168585  777892 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1109 22:07:32.172839  777892 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1109 22:07:32.172875  777892 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1109 22:07:32.172965  777892 ssh_runner.go:195] Run: crio config
	I1109 22:07:32.222470  777892 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1109 22:07:32.222494  777892 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1109 22:07:32.222503  777892 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1109 22:07:32.222507  777892 command_runner.go:130] > #
	I1109 22:07:32.222516  777892 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1109 22:07:32.222523  777892 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1109 22:07:32.222533  777892 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1109 22:07:32.222551  777892 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1109 22:07:32.222556  777892 command_runner.go:130] > # reload'.
	I1109 22:07:32.222564  777892 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1109 22:07:32.222575  777892 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1109 22:07:32.222583  777892 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1109 22:07:32.222595  777892 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1109 22:07:32.222602  777892 command_runner.go:130] > [crio]
	I1109 22:07:32.222613  777892 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1109 22:07:32.222620  777892 command_runner.go:130] > # containers images, in this directory.
	I1109 22:07:32.222637  777892 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1109 22:07:32.222652  777892 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1109 22:07:32.222842  777892 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1109 22:07:32.222859  777892 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1109 22:07:32.222867  777892 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1109 22:07:32.222873  777892 command_runner.go:130] > # storage_driver = "vfs"
	I1109 22:07:32.222883  777892 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1109 22:07:32.222892  777892 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1109 22:07:32.222897  777892 command_runner.go:130] > # storage_option = [
	I1109 22:07:32.223076  777892 command_runner.go:130] > # ]
	I1109 22:07:32.223091  777892 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1109 22:07:32.223099  777892 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1109 22:07:32.223104  777892 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1109 22:07:32.223114  777892 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1109 22:07:32.223124  777892 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1109 22:07:32.223130  777892 command_runner.go:130] > # always happen on a node reboot
	I1109 22:07:32.223142  777892 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1109 22:07:32.223150  777892 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1109 22:07:32.223160  777892 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1109 22:07:32.223169  777892 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1109 22:07:32.223180  777892 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1109 22:07:32.223189  777892 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1109 22:07:32.223199  777892 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1109 22:07:32.223210  777892 command_runner.go:130] > # internal_wipe = true
	I1109 22:07:32.223218  777892 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1109 22:07:32.223226  777892 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1109 22:07:32.223236  777892 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1109 22:07:32.223243  777892 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1109 22:07:32.223256  777892 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1109 22:07:32.223261  777892 command_runner.go:130] > [crio.api]
	I1109 22:07:32.223273  777892 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1109 22:07:32.223279  777892 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1109 22:07:32.223286  777892 command_runner.go:130] > # IP address on which the stream server will listen.
	I1109 22:07:32.223294  777892 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1109 22:07:32.223302  777892 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1109 22:07:32.223312  777892 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1109 22:07:32.223318  777892 command_runner.go:130] > # stream_port = "0"
	I1109 22:07:32.223329  777892 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1109 22:07:32.223335  777892 command_runner.go:130] > # stream_enable_tls = false
	I1109 22:07:32.223346  777892 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1109 22:07:32.223352  777892 command_runner.go:130] > # stream_idle_timeout = ""
	I1109 22:07:32.223359  777892 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1109 22:07:32.223370  777892 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1109 22:07:32.223376  777892 command_runner.go:130] > # minutes.
	I1109 22:07:32.223382  777892 command_runner.go:130] > # stream_tls_cert = ""
	I1109 22:07:32.223392  777892 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1109 22:07:32.223405  777892 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1109 22:07:32.223411  777892 command_runner.go:130] > # stream_tls_key = ""
	I1109 22:07:32.223423  777892 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1109 22:07:32.223431  777892 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1109 22:07:32.223441  777892 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1109 22:07:32.223446  777892 command_runner.go:130] > # stream_tls_ca = ""
	I1109 22:07:32.223460  777892 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1109 22:07:32.223466  777892 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1109 22:07:32.223475  777892 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1109 22:07:32.223482  777892 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1109 22:07:32.223511  777892 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1109 22:07:32.223524  777892 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1109 22:07:32.223530  777892 command_runner.go:130] > [crio.runtime]
	I1109 22:07:32.223543  777892 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1109 22:07:32.223551  777892 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1109 22:07:32.223556  777892 command_runner.go:130] > # "nofile=1024:2048"
	I1109 22:07:32.223566  777892 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1109 22:07:32.223573  777892 command_runner.go:130] > # default_ulimits = [
	I1109 22:07:32.223579  777892 command_runner.go:130] > # ]
	I1109 22:07:32.223587  777892 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1109 22:07:32.223596  777892 command_runner.go:130] > # no_pivot = false
	I1109 22:07:32.223603  777892 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1109 22:07:32.223615  777892 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1109 22:07:32.223621  777892 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1109 22:07:32.223632  777892 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1109 22:07:32.223638  777892 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1109 22:07:32.223647  777892 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1109 22:07:32.223827  777892 command_runner.go:130] > # conmon = ""
	I1109 22:07:32.223841  777892 command_runner.go:130] > # Cgroup setting for conmon
	I1109 22:07:32.223850  777892 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1109 22:07:32.223856  777892 command_runner.go:130] > conmon_cgroup = "pod"
	I1109 22:07:32.223868  777892 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1109 22:07:32.223879  777892 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1109 22:07:32.223888  777892 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1109 22:07:32.223897  777892 command_runner.go:130] > # conmon_env = [
	I1109 22:07:32.223902  777892 command_runner.go:130] > # ]
	I1109 22:07:32.223909  777892 command_runner.go:130] > # Additional environment variables to set for all the
	I1109 22:07:32.223919  777892 command_runner.go:130] > # containers. These are overridden if set in the
	I1109 22:07:32.223936  777892 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1109 22:07:32.223944  777892 command_runner.go:130] > # default_env = [
	I1109 22:07:32.223948  777892 command_runner.go:130] > # ]
	I1109 22:07:32.223956  777892 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1109 22:07:32.223963  777892 command_runner.go:130] > # selinux = false
	I1109 22:07:32.223971  777892 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1109 22:07:32.223983  777892 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1109 22:07:32.223991  777892 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1109 22:07:32.223999  777892 command_runner.go:130] > # seccomp_profile = ""
	I1109 22:07:32.224006  777892 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1109 22:07:32.224014  777892 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1109 22:07:32.224025  777892 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1109 22:07:32.224033  777892 command_runner.go:130] > # which might increase security.
	I1109 22:07:32.224039  777892 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1109 22:07:32.224049  777892 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1109 22:07:32.224058  777892 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1109 22:07:32.224072  777892 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1109 22:07:32.224080  777892 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1109 22:07:32.224090  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:07:32.224273  777892 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1109 22:07:32.224286  777892 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1109 22:07:32.224292  777892 command_runner.go:130] > # the cgroup blockio controller.
	I1109 22:07:32.224314  777892 command_runner.go:130] > # blockio_config_file = ""
	I1109 22:07:32.224327  777892 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1109 22:07:32.224335  777892 command_runner.go:130] > # irqbalance daemon.
	I1109 22:07:32.224347  777892 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1109 22:07:32.224355  777892 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1109 22:07:32.224365  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:07:32.224370  777892 command_runner.go:130] > # rdt_config_file = ""
	I1109 22:07:32.224377  777892 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1109 22:07:32.224385  777892 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1109 22:07:32.224392  777892 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1109 22:07:32.224400  777892 command_runner.go:130] > # separate_pull_cgroup = ""
	I1109 22:07:32.224408  777892 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1109 22:07:32.224419  777892 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1109 22:07:32.224425  777892 command_runner.go:130] > # will be added.
	I1109 22:07:32.224434  777892 command_runner.go:130] > # default_capabilities = [
	I1109 22:07:32.224439  777892 command_runner.go:130] > # 	"CHOWN",
	I1109 22:07:32.224444  777892 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1109 22:07:32.224453  777892 command_runner.go:130] > # 	"FSETID",
	I1109 22:07:32.224458  777892 command_runner.go:130] > # 	"FOWNER",
	I1109 22:07:32.224462  777892 command_runner.go:130] > # 	"SETGID",
	I1109 22:07:32.224467  777892 command_runner.go:130] > # 	"SETUID",
	I1109 22:07:32.224472  777892 command_runner.go:130] > # 	"SETPCAP",
	I1109 22:07:32.224479  777892 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1109 22:07:32.224672  777892 command_runner.go:130] > # 	"KILL",
	I1109 22:07:32.224682  777892 command_runner.go:130] > # ]
	I1109 22:07:32.224691  777892 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1109 22:07:32.224700  777892 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1109 22:07:32.224708  777892 command_runner.go:130] > # add_inheritable_capabilities = true
	I1109 22:07:32.224716  777892 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1109 22:07:32.224729  777892 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1109 22:07:32.224734  777892 command_runner.go:130] > # default_sysctls = [
	I1109 22:07:32.224743  777892 command_runner.go:130] > # ]
	I1109 22:07:32.224749  777892 command_runner.go:130] > # List of devices on the host that a
	I1109 22:07:32.224757  777892 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1109 22:07:32.224766  777892 command_runner.go:130] > # allowed_devices = [
	I1109 22:07:32.224771  777892 command_runner.go:130] > # 	"/dev/fuse",
	I1109 22:07:32.224775  777892 command_runner.go:130] > # ]
	I1109 22:07:32.224781  777892 command_runner.go:130] > # List of additional devices, specified as
	I1109 22:07:32.224802  777892 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1109 22:07:32.224815  777892 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1109 22:07:32.224823  777892 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1109 22:07:32.224831  777892 command_runner.go:130] > # additional_devices = [
	I1109 22:07:32.224836  777892 command_runner.go:130] > # ]
	I1109 22:07:32.224843  777892 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1109 22:07:32.224851  777892 command_runner.go:130] > # cdi_spec_dirs = [
	I1109 22:07:32.224856  777892 command_runner.go:130] > # 	"/etc/cdi",
	I1109 22:07:32.224861  777892 command_runner.go:130] > # 	"/var/run/cdi",
	I1109 22:07:32.224866  777892 command_runner.go:130] > # ]
	I1109 22:07:32.224876  777892 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1109 22:07:32.224886  777892 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1109 22:07:32.224892  777892 command_runner.go:130] > # Defaults to false.
	I1109 22:07:32.225079  777892 command_runner.go:130] > # device_ownership_from_security_context = false
	I1109 22:07:32.225095  777892 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1109 22:07:32.225103  777892 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1109 22:07:32.225108  777892 command_runner.go:130] > # hooks_dir = [
	I1109 22:07:32.225116  777892 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1109 22:07:32.225123  777892 command_runner.go:130] > # ]
	I1109 22:07:32.225130  777892 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1109 22:07:32.225140  777892 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1109 22:07:32.225147  777892 command_runner.go:130] > # its default mounts from the following two files:
	I1109 22:07:32.225155  777892 command_runner.go:130] > #
	I1109 22:07:32.225163  777892 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1109 22:07:32.225175  777892 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1109 22:07:32.225182  777892 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1109 22:07:32.225190  777892 command_runner.go:130] > #
	I1109 22:07:32.225206  777892 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1109 22:07:32.225216  777892 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1109 22:07:32.225229  777892 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1109 22:07:32.225236  777892 command_runner.go:130] > #      only add mounts it finds in this file.
	I1109 22:07:32.225243  777892 command_runner.go:130] > #
	I1109 22:07:32.225249  777892 command_runner.go:130] > # default_mounts_file = ""
	I1109 22:07:32.225256  777892 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1109 22:07:32.225264  777892 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1109 22:07:32.225272  777892 command_runner.go:130] > # pids_limit = 0
	I1109 22:07:32.225282  777892 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1109 22:07:32.225293  777892 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1109 22:07:32.225301  777892 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1109 22:07:32.225315  777892 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1109 22:07:32.225320  777892 command_runner.go:130] > # log_size_max = -1
	I1109 22:07:32.225334  777892 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1109 22:07:32.225341  777892 command_runner.go:130] > # log_to_journald = false
	I1109 22:07:32.225349  777892 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1109 22:07:32.225357  777892 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1109 22:07:32.225364  777892 command_runner.go:130] > # Path to directory for container attach sockets.
	I1109 22:07:32.225372  777892 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1109 22:07:32.225379  777892 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1109 22:07:32.225388  777892 command_runner.go:130] > # bind_mount_prefix = ""
	I1109 22:07:32.225395  777892 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1109 22:07:32.225404  777892 command_runner.go:130] > # read_only = false
	I1109 22:07:32.225412  777892 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1109 22:07:32.225423  777892 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1109 22:07:32.225429  777892 command_runner.go:130] > # live configuration reload.
	I1109 22:07:32.225439  777892 command_runner.go:130] > # log_level = "info"
	I1109 22:07:32.225446  777892 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1109 22:07:32.225452  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:07:32.225457  777892 command_runner.go:130] > # log_filter = ""
	I1109 22:07:32.225464  777892 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1109 22:07:32.225476  777892 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1109 22:07:32.225482  777892 command_runner.go:130] > # separated by comma.
	I1109 22:07:32.225736  777892 command_runner.go:130] > # uid_mappings = ""
	I1109 22:07:32.225749  777892 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1109 22:07:32.225758  777892 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1109 22:07:32.225763  777892 command_runner.go:130] > # separated by comma.
	I1109 22:07:32.225768  777892 command_runner.go:130] > # gid_mappings = ""
	I1109 22:07:32.225782  777892 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1109 22:07:32.225790  777892 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1109 22:07:32.225801  777892 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1109 22:07:32.225807  777892 command_runner.go:130] > # minimum_mappable_uid = -1
	I1109 22:07:32.225819  777892 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1109 22:07:32.225827  777892 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1109 22:07:32.225835  777892 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1109 22:07:32.225841  777892 command_runner.go:130] > # minimum_mappable_gid = -1
	I1109 22:07:32.225868  777892 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1109 22:07:32.225880  777892 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1109 22:07:32.225888  777892 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1109 22:07:32.225897  777892 command_runner.go:130] > # ctr_stop_timeout = 30
	I1109 22:07:32.225904  777892 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1109 22:07:32.225912  777892 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1109 22:07:32.225918  777892 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1109 22:07:32.225924  777892 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1109 22:07:32.225934  777892 command_runner.go:130] > # drop_infra_ctr = true
	I1109 22:07:32.225942  777892 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1109 22:07:32.225952  777892 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1109 22:07:32.225964  777892 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1109 22:07:32.225973  777892 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1109 22:07:32.225981  777892 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1109 22:07:32.225989  777892 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1109 22:07:32.225994  777892 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1109 22:07:32.226003  777892 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1109 22:07:32.226012  777892 command_runner.go:130] > # pinns_path = ""
	I1109 22:07:32.226020  777892 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1109 22:07:32.226031  777892 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1109 22:07:32.226039  777892 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1109 22:07:32.226284  777892 command_runner.go:130] > # default_runtime = "runc"
	I1109 22:07:32.226301  777892 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1109 22:07:32.226334  777892 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1109 22:07:32.226351  777892 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1109 22:07:32.226362  777892 command_runner.go:130] > # creation as a file is not desired either.
	I1109 22:07:32.226373  777892 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1109 22:07:32.226382  777892 command_runner.go:130] > # the hostname is being managed dynamically.
	I1109 22:07:32.226388  777892 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1109 22:07:32.226393  777892 command_runner.go:130] > # ]
	I1109 22:07:32.226400  777892 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1109 22:07:32.226409  777892 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1109 22:07:32.226423  777892 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1109 22:07:32.226431  777892 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1109 22:07:32.226439  777892 command_runner.go:130] > #
	I1109 22:07:32.226445  777892 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1109 22:07:32.226451  777892 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1109 22:07:32.226460  777892 command_runner.go:130] > #  runtime_type = "oci"
	I1109 22:07:32.226467  777892 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1109 22:07:32.226473  777892 command_runner.go:130] > #  privileged_without_host_devices = false
	I1109 22:07:32.226479  777892 command_runner.go:130] > #  allowed_annotations = []
	I1109 22:07:32.226488  777892 command_runner.go:130] > # Where:
	I1109 22:07:32.226495  777892 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1109 22:07:32.226508  777892 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1109 22:07:32.226516  777892 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1109 22:07:32.226527  777892 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1109 22:07:32.226533  777892 command_runner.go:130] > #   in $PATH.
	I1109 22:07:32.226545  777892 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1109 22:07:32.226551  777892 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1109 22:07:32.226559  777892 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1109 22:07:32.226564  777892 command_runner.go:130] > #   state.
	I1109 22:07:32.226572  777892 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1109 22:07:32.226582  777892 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1109 22:07:32.226590  777892 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1109 22:07:32.226600  777892 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1109 22:07:32.226609  777892 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1109 22:07:32.226621  777892 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1109 22:07:32.226693  777892 command_runner.go:130] > #   The currently recognized values are:
	I1109 22:07:32.226710  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1109 22:07:32.226720  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1109 22:07:32.226728  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1109 22:07:32.226735  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1109 22:07:32.226745  777892 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1109 22:07:32.226757  777892 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1109 22:07:32.226765  777892 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1109 22:07:32.226778  777892 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1109 22:07:32.226785  777892 command_runner.go:130] > #   should be moved to the container's cgroup
	I1109 22:07:32.226794  777892 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1109 22:07:32.226800  777892 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1109 22:07:32.226805  777892 command_runner.go:130] > runtime_type = "oci"
	I1109 22:07:32.226811  777892 command_runner.go:130] > runtime_root = "/run/runc"
	I1109 22:07:32.226816  777892 command_runner.go:130] > runtime_config_path = ""
	I1109 22:07:32.226826  777892 command_runner.go:130] > monitor_path = ""
	I1109 22:07:32.226831  777892 command_runner.go:130] > monitor_cgroup = ""
	I1109 22:07:32.226838  777892 command_runner.go:130] > monitor_exec_cgroup = ""
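	A quick way to confirm which handler CRI-O resolved as its default, assuming the crio binary is available on the node (a sketch, not part of this run):
	# Print the effective configuration and pick out the runtime settings.
	sudo crio config 2>/dev/null | grep -E 'default_runtime|runtime_path'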
	I1109 22:07:32.226861  777892 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1109 22:07:32.226871  777892 command_runner.go:130] > # running containers
	I1109 22:07:32.226878  777892 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1109 22:07:32.226886  777892 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1109 22:07:32.226894  777892 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1109 22:07:32.226902  777892 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1109 22:07:32.226912  777892 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1109 22:07:32.226918  777892 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1109 22:07:32.226927  777892 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1109 22:07:32.226933  777892 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1109 22:07:32.226944  777892 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1109 22:07:32.226950  777892 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1109 22:07:32.226959  777892 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1109 22:07:32.226965  777892 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1109 22:07:32.226973  777892 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1109 22:07:32.226986  777892 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1109 22:07:32.226996  777892 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1109 22:07:32.227007  777892 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1109 22:07:32.227019  777892 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1109 22:07:32.227033  777892 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1109 22:07:32.227041  777892 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1109 22:07:32.227050  777892 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1109 22:07:32.227055  777892 command_runner.go:130] > # Example:
	I1109 22:07:32.227065  777892 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1109 22:07:32.227072  777892 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1109 22:07:32.227083  777892 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1109 22:07:32.227089  777892 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1109 22:07:32.227097  777892 command_runner.go:130] > # cpuset = "0-1"
	I1109 22:07:32.227102  777892 command_runner.go:130] > # cpushares = 0
	I1109 22:07:32.227107  777892 command_runner.go:130] > # Where:
	I1109 22:07:32.227116  777892 command_runner.go:130] > # The workload name is workload-type.
	I1109 22:07:32.227125  777892 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1109 22:07:32.227132  777892 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1109 22:07:32.227140  777892 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1109 22:07:32.227161  777892 command_runner.go:130] > # To configure the CPU shares a container gets in the example above, the pod would need the following annotation:
	I1109 22:07:32.227173  777892 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1109 22:07:32.227178  777892 command_runner.go:130] > # 
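	A hypothetical pod opting into the example workload above; the annotation keys follow the comments verbatim, while the pod name and share value are assumptions:
	# Create a pod carrying the activation annotation (key only, value ignored)
	# and a per-container override, per the example workload above.
	kubectl run mypod --image=docker.io/nginx:alpine \
	  --annotations='io.crio/workload=' \
	  --annotations='io.crio.workload-type/mypod={"cpushares": "512"}'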
	I1109 22:07:32.227190  777892 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1109 22:07:32.227195  777892 command_runner.go:130] > #
	I1109 22:07:32.227202  777892 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1109 22:07:32.227210  777892 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1109 22:07:32.227217  777892 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1109 22:07:32.227229  777892 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1109 22:07:32.227237  777892 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1109 22:07:32.227246  777892 command_runner.go:130] > [crio.image]
	I1109 22:07:32.227254  777892 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1109 22:07:32.227263  777892 command_runner.go:130] > # default_transport = "docker://"
	I1109 22:07:32.227271  777892 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1109 22:07:32.227281  777892 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1109 22:07:32.227286  777892 command_runner.go:130] > # global_auth_file = ""
	I1109 22:07:32.227293  777892 command_runner.go:130] > # The image used to instantiate infra containers.
	I1109 22:07:32.227303  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:07:32.227374  777892 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1109 22:07:32.227389  777892 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1109 22:07:32.227396  777892 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1109 22:07:32.227403  777892 command_runner.go:130] > # This option supports live configuration reload.
	I1109 22:07:32.227409  777892 command_runner.go:130] > # pause_image_auth_file = ""
	I1109 22:07:32.227416  777892 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1109 22:07:32.227429  777892 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1109 22:07:32.227437  777892 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1109 22:07:32.227445  777892 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1109 22:07:32.227450  777892 command_runner.go:130] > # pause_command = "/pause"
	I1109 22:07:32.227458  777892 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1109 22:07:32.227465  777892 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1109 22:07:32.227473  777892 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1109 22:07:32.227485  777892 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1109 22:07:32.227492  777892 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1109 22:07:32.227796  777892 command_runner.go:130] > # signature_policy = ""
	I1109 22:07:32.227816  777892 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1109 22:07:32.227826  777892 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1109 22:07:32.227831  777892 command_runner.go:130] > # changing them here.
	I1109 22:07:32.227837  777892 command_runner.go:130] > # insecure_registries = [
	I1109 22:07:32.227843  777892 command_runner.go:130] > # ]
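	Per the advice above, a registry would normally be marked insecure in /etc/containers/registries.conf rather than here; a sketch with an assumed registry host:
	# Append a v2-format entry marking an internal registry as insecure.
	sudo tee -a /etc/containers/registries.conf <<'EOF'
	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true
	EOF
	sudo systemctl restart crio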
	I1109 22:07:32.227852  777892 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1109 22:07:32.227861  777892 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1109 22:07:32.227867  777892 command_runner.go:130] > # image_volumes = "mkdir"
	I1109 22:07:32.227873  777892 command_runner.go:130] > # Temporary directory to use for storing big files
	I1109 22:07:32.227879  777892 command_runner.go:130] > # big_files_temporary_dir = ""
	I1109 22:07:32.227886  777892 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1109 22:07:32.227893  777892 command_runner.go:130] > # CNI plugins.
	I1109 22:07:32.227898  777892 command_runner.go:130] > [crio.network]
	I1109 22:07:32.227907  777892 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1109 22:07:32.227915  777892 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1109 22:07:32.227922  777892 command_runner.go:130] > # cni_default_network = ""
	I1109 22:07:32.227930  777892 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1109 22:07:32.227936  777892 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1109 22:07:32.227945  777892 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1109 22:07:32.227950  777892 command_runner.go:130] > # plugin_dirs = [
	I1109 22:07:32.227955  777892 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1109 22:07:32.227959  777892 command_runner.go:130] > # ]
	I1109 22:07:32.227966  777892 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1109 22:07:32.227974  777892 command_runner.go:130] > [crio.metrics]
	I1109 22:07:32.227981  777892 command_runner.go:130] > # Globally enable or disable metrics support.
	I1109 22:07:32.227991  777892 command_runner.go:130] > # enable_metrics = false
	I1109 22:07:32.227997  777892 command_runner.go:130] > # Specify enabled metrics collectors.
	I1109 22:07:32.228003  777892 command_runner.go:130] > # Per default all metrics are enabled.
	I1109 22:07:32.228013  777892 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1109 22:07:32.228021  777892 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1109 22:07:32.228032  777892 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1109 22:07:32.228037  777892 command_runner.go:130] > # metrics_collectors = [
	I1109 22:07:32.228042  777892 command_runner.go:130] > # 	"operations",
	I1109 22:07:32.228048  777892 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1109 22:07:32.228054  777892 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1109 22:07:32.228059  777892 command_runner.go:130] > # 	"operations_errors",
	I1109 22:07:32.228067  777892 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1109 22:07:32.228072  777892 command_runner.go:130] > # 	"image_pulls_by_name",
	I1109 22:07:32.228078  777892 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1109 22:07:32.228083  777892 command_runner.go:130] > # 	"image_pulls_failures",
	I1109 22:07:32.228091  777892 command_runner.go:130] > # 	"image_pulls_successes",
	I1109 22:07:32.228096  777892 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1109 22:07:32.228276  777892 command_runner.go:130] > # 	"image_layer_reuse",
	I1109 22:07:32.228290  777892 command_runner.go:130] > # 	"containers_oom_total",
	I1109 22:07:32.228296  777892 command_runner.go:130] > # 	"containers_oom",
	I1109 22:07:32.228301  777892 command_runner.go:130] > # 	"processes_defunct",
	I1109 22:07:32.228309  777892 command_runner.go:130] > # 	"operations_total",
	I1109 22:07:32.228318  777892 command_runner.go:130] > # 	"operations_latency_seconds",
	I1109 22:07:32.228324  777892 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1109 22:07:32.228329  777892 command_runner.go:130] > # 	"operations_errors_total",
	I1109 22:07:32.228337  777892 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1109 22:07:32.228343  777892 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1109 22:07:32.228350  777892 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1109 22:07:32.228356  777892 command_runner.go:130] > # 	"image_pulls_success_total",
	I1109 22:07:32.228361  777892 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1109 22:07:32.228367  777892 command_runner.go:130] > # 	"containers_oom_count_total",
	I1109 22:07:32.228373  777892 command_runner.go:130] > # ]
	I1109 22:07:32.228380  777892 command_runner.go:130] > # The port on which the metrics server will listen.
	I1109 22:07:32.228387  777892 command_runner.go:130] > # metrics_port = 9090
	I1109 22:07:32.228393  777892 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1109 22:07:32.228399  777892 command_runner.go:130] > # metrics_socket = ""
	I1109 22:07:32.228407  777892 command_runner.go:130] > # The certificate for the secure metrics server.
	I1109 22:07:32.228419  777892 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1109 22:07:32.228428  777892 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1109 22:07:32.228437  777892 command_runner.go:130] > # certificate on any modification event.
	I1109 22:07:32.228443  777892 command_runner.go:130] > # metrics_cert = ""
	I1109 22:07:32.228450  777892 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1109 22:07:32.228458  777892 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1109 22:07:32.228465  777892 command_runner.go:130] > # metrics_key = ""
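	If enable_metrics were switched on, the defaults above imply a Prometheus endpoint on port 9090; a local scrape might look like this (sketch, not from this run):
	# Fetch the CRI-O metrics endpoint and show a few crio_ counters.
	curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head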
	I1109 22:07:32.228472  777892 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1109 22:07:32.228477  777892 command_runner.go:130] > [crio.tracing]
	I1109 22:07:32.228487  777892 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1109 22:07:32.228492  777892 command_runner.go:130] > # enable_tracing = false
	I1109 22:07:32.228499  777892 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1109 22:07:32.228509  777892 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1109 22:07:32.228515  777892 command_runner.go:130] > # Number of samples to collect per million spans.
	I1109 22:07:32.228521  777892 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1109 22:07:32.228528  777892 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1109 22:07:32.228533  777892 command_runner.go:130] > [crio.stats]
	I1109 22:07:32.228540  777892 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1109 22:07:32.228550  777892 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1109 22:07:32.228555  777892 command_runner.go:130] > # stats_collection_period = 0
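	One plausible way to reproduce the configuration dump above on the node, using the profile name from this run:
	# Print the effective CRI-O configuration from inside the minikube node.
	minikube ssh -p multinode-833232 -- sudo crio config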
	I1109 22:07:32.230307  777892 command_runner.go:130] ! time="2023-11-09 22:07:32.219702963Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1109 22:07:32.230351  777892 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1109 22:07:32.230749  777892 cni.go:84] Creating CNI manager for ""
	I1109 22:07:32.230762  777892 cni.go:136] 2 nodes found, recommending kindnet
	I1109 22:07:32.230773  777892 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 22:07:32.230825  777892 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-833232 NodeName:multinode-833232-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 22:07:32.230954  777892 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-833232-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
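	A hedged sanity check for a generated config like the one above: kubeadm v1.26+ ships a validate subcommand (the file path here is assumed):
	# Validate the kubeadm configuration documents before joining.
	kubeadm config validate --config /tmp/kubeadm.yaml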
	
	I1109 22:07:32.231012  777892 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-833232-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-833232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
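	To inspect the resulting unit and drop-in on the new node (a sketch; the -n flag selects the m02 node within this run's profile):
	# Show the kubelet service together with the 10-kubeadm.conf drop-in.
	minikube ssh -p multinode-833232 -n m02 -- systemctl cat kubelet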
	I1109 22:07:32.231083  777892 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1109 22:07:32.240371  777892 command_runner.go:130] > kubeadm
	I1109 22:07:32.240483  777892 command_runner.go:130] > kubectl
	I1109 22:07:32.240502  777892 command_runner.go:130] > kubelet
	I1109 22:07:32.241812  777892 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 22:07:32.241879  777892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1109 22:07:32.252136  777892 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1109 22:07:32.273116  777892 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 22:07:32.295147  777892 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1109 22:07:32.299692  777892 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 22:07:32.313160  777892 host.go:66] Checking if "multinode-833232" exists ...
	I1109 22:07:32.313423  777892 start.go:304] JoinCluster: &{Name:multinode-833232 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-833232 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 22:07:32.313515  777892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1109 22:07:32.313566  777892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:07:32.313934  777892 config.go:182] Loaded profile config "multinode-833232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 22:07:32.331441  777892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:07:32.504032  777892 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 1l7xyt.b61hcjbxw0bmvzi3 --discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 
	I1109 22:07:32.504078  777892 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1109 22:07:32.504110  777892 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1l7xyt.b61hcjbxw0bmvzi3 --discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-833232-m02"
	I1109 22:07:32.547196  777892 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 22:07:32.587035  777892 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1109 22:07:32.587056  777892 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1049-aws
	I1109 22:07:32.587063  777892 command_runner.go:130] > OS: Linux
	I1109 22:07:32.587069  777892 command_runner.go:130] > CGROUPS_CPU: enabled
	I1109 22:07:32.587076  777892 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1109 22:07:32.587082  777892 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1109 22:07:32.587089  777892 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1109 22:07:32.587095  777892 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1109 22:07:32.587107  777892 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1109 22:07:32.587114  777892 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1109 22:07:32.587122  777892 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1109 22:07:32.587128  777892 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1109 22:07:32.699404  777892 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 22:07:32.699427  777892 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 22:07:32.730915  777892 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 22:07:32.731109  777892 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 22:07:32.731360  777892 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1109 22:07:32.824718  777892 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1109 22:07:37.339235  777892 command_runner.go:130] > This node has joined the cluster:
	I1109 22:07:37.339259  777892 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1109 22:07:37.339267  777892 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1109 22:07:37.339275  777892 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1109 22:07:37.342156  777892 command_runner.go:130] ! W1109 22:07:32.546802    1035 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1109 22:07:37.342182  777892 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1109 22:07:37.342197  777892 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 22:07:37.342214  777892 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1l7xyt.b61hcjbxw0bmvzi3 --discovery-token-ca-cert-hash sha256:bccbad01ee468534c8ab0750a6598e25f4053dc13b80746c4a36c911ea009630 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-833232-m02": (4.838088335s)
	I1109 22:07:37.342231  777892 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1109 22:07:37.570039  777892 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1109 22:07:37.570064  777892 start.go:306] JoinCluster complete in 5.256640968s
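	As the join output suggests, membership can be verified from the control plane; the kubectl context matches the profile name:
	# List both nodes; m02 should appear, initially NotReady until CNI is up.
	kubectl --context multinode-833232 get nodes -o wide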
	I1109 22:07:37.570075  777892 cni.go:84] Creating CNI manager for ""
	I1109 22:07:37.570081  777892 cni.go:136] 2 nodes found, recommending kindnet
	I1109 22:07:37.570133  777892 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 22:07:37.574901  777892 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1109 22:07:37.574920  777892 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1109 22:07:37.574928  777892 command_runner.go:130] > Device: 36h/54d	Inode: 1827011     Links: 1
	I1109 22:07:37.574936  777892 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1109 22:07:37.574943  777892 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1109 22:07:37.574949  777892 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1109 22:07:37.574955  777892 command_runner.go:130] > Change: 2023-11-09 21:28:21.758106581 +0000
	I1109 22:07:37.574961  777892 command_runner.go:130] >  Birth: 2023-11-09 21:28:21.718106882 +0000
	I1109 22:07:37.575003  777892 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1109 22:07:37.575010  777892 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1109 22:07:37.596419  777892 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 22:07:37.887780  777892 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1109 22:07:37.892423  777892 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1109 22:07:37.895404  777892 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1109 22:07:37.909645  777892 command_runner.go:130] > daemonset.apps/kindnet configured
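	A hedged equivalent check that the kindnet DaemonSet configured above actually rolls out to both nodes:
	# Block until the kindnet DaemonSet reports all pods ready.
	kubectl --context multinode-833232 -n kube-system rollout status ds/kindnet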
	I1109 22:07:37.915039  777892 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:07:37.915299  777892 kapi.go:59] client config for multinode-833232: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 22:07:37.915618  777892 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1109 22:07:37.915636  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:37.915646  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:37.915653  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:37.918284  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:37.918303  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:37.918327  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:37 GMT
	I1109 22:07:37.918334  777892 round_trippers.go:580]     Audit-Id: 8f0b4088-df79-41e4-973d-4394181b7b02
	I1109 22:07:37.918341  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:37.918347  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:37.918353  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:37.918359  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:37.918366  777892 round_trippers.go:580]     Content-Length: 291
	I1109 22:07:37.918393  777892 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"cd0c9666-0fbb-4844-a49b-1e39c4363b86","resourceVersion":"455","creationTimestamp":"2023-11-09T22:06:33Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1109 22:07:37.918478  777892 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-833232" context rescaled to 1 replicas
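	The rescale above goes through the Scale subresource; a manual equivalent with kubectl would be:
	# Pin the coredns deployment at one replica in the kube-system namespace.
	kubectl --context multinode-833232 -n kube-system scale deployment coredns --replicas=1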
	I1109 22:07:37.918510  777892 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1109 22:07:37.921743  777892 out.go:177] * Verifying Kubernetes components...
	I1109 22:07:37.923828  777892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 22:07:37.938096  777892 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:07:37.938438  777892 kapi.go:59] client config for multinode-833232: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.crt", KeyFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/profiles/multinode-833232/client.key", CAFile:"/home/jenkins/minikube-integration/17565-708188/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 22:07:37.938776  777892 node_ready.go:35] waiting up to 6m0s for node "multinode-833232-m02" to be "Ready" ...
	I1109 22:07:37.938874  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:37.938887  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:37.938896  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:37.938913  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:37.941530  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:37.941550  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:37.941558  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:37.941565  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:37.941571  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:37 GMT
	I1109 22:07:37.941577  777892 round_trippers.go:580]     Audit-Id: 480dcf9d-c181-4757-b30f-c32fa04c1522
	I1109 22:07:37.941583  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:37.941590  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:37.941976  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"492","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1109 22:07:37.942538  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:37.942555  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:37.942564  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:37.942587  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:37.945037  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:37.945058  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:37.945067  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:37.945073  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:37.945080  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:37.945086  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:37 GMT
	I1109 22:07:37.945095  777892 round_trippers.go:580]     Audit-Id: 0dee5d5f-aadb-42b4-b705-ebe143262a53
	I1109 22:07:37.945109  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:37.945324  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"492","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1109 22:07:38.446404  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:38.446427  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:38.446437  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:38.446444  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:38.448866  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:38.448929  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:38.448951  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:38.448973  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:38.449009  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:38.449035  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:38.449057  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:38 GMT
	I1109 22:07:38.449075  777892 round_trippers.go:580]     Audit-Id: db3ae07a-c6a5-4b06-8ea8-f7f160d35e4d
	I1109 22:07:38.449184  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"492","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1109 22:07:38.946692  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:38.946720  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:38.946732  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:38.946740  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:38.949262  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:38.949280  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:38.949289  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:38.949295  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:38 GMT
	I1109 22:07:38.949302  777892 round_trippers.go:580]     Audit-Id: 63c4af0e-373e-4c53-bf83-a530ab776b31
	I1109 22:07:38.949308  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:38.949314  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:38.949320  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:38.949431  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"492","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1109 22:07:39.445893  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:39.445917  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:39.445928  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:39.445935  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:39.448568  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:39.448607  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:39.448617  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:39.448629  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:39.448636  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:39.448643  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:39 GMT
	I1109 22:07:39.448653  777892 round_trippers.go:580]     Audit-Id: 9d09653c-b51c-44f9-bb4e-7f97291aa87e
	I1109 22:07:39.448659  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:39.448818  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"492","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1109 22:07:39.946456  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:39.946484  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:39.946496  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:39.946503  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:39.949092  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:39.949120  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:39.949130  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:39.949137  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:39.949144  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:39.949153  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:39.949161  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:39 GMT
	I1109 22:07:39.949167  777892 round_trippers.go:580]     Audit-Id: efd036c8-7f7b-4332-87a0-8c9886d4ba82
	I1109 22:07:39.949262  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"492","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1109 22:07:39.949671  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
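	The polling loop above is roughly what the following single command expresses (the timeout mirrors the 6m wait configured earlier):
	# Wait for the new worker to report the Ready condition.
	kubectl --context multinode-833232 wait --for=condition=Ready \
	  node/multinode-833232-m02 --timeout=6m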
	I1109 22:07:40.446365  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:40.446385  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:40.446395  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:40.446403  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:40.448767  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:40.448791  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:40.448800  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:40.448806  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:40.448813  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:40.448820  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:40.448829  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:40 GMT
	I1109 22:07:40.448836  777892 round_trippers.go:580]     Audit-Id: 1d3082ef-deef-4fa9-a74a-fa1cdea9eaa3
	I1109 22:07:40.449105  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"492","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1109 22:07:40.946059  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:40.946081  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:40.946092  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:40.946100  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:40.948599  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:40.948630  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:40.948639  777892 round_trippers.go:580]     Audit-Id: ca5e8b61-4e7c-454f-827c-893a3533d03e
	I1109 22:07:40.948646  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:40.948652  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:40.948658  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:40.948664  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:40.948673  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:40 GMT
	I1109 22:07:40.948808  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"492","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volu [truncated 5183 chars]
	I1109 22:07:41.445869  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:41.445892  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:41.445903  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:41.445910  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:41.448304  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:41.448328  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:41.448336  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:41.448342  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:41.448349  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:41.448363  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:41 GMT
	I1109 22:07:41.448373  777892 round_trippers.go:580]     Audit-Id: 5cc74ccb-fbea-4745-bb14-f021ab58d1a9
	I1109 22:07:41.448380  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:41.448527  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:41.946596  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:41.946615  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:41.946625  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:41.946632  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:41.949296  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:41.949318  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:41.949327  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:41.949333  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:41.949340  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:41 GMT
	I1109 22:07:41.949346  777892 round_trippers.go:580]     Audit-Id: 60c89ca1-0671-480f-a351-e4068b383b42
	I1109 22:07:41.949352  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:41.949358  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:41.949471  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:41.949839  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:07:42.446629  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:42.446656  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:42.446666  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:42.446674  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:42.449075  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:42.449097  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:42.449105  777892 round_trippers.go:580]     Audit-Id: 7b0ce8e1-328e-45b8-b8bc-96aab9eb65ba
	I1109 22:07:42.449112  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:42.449119  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:42.449126  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:42.449132  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:42.449140  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:42 GMT
	I1109 22:07:42.449254  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:42.946261  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:42.946289  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:42.946299  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:42.946306  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:42.948919  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:42.948951  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:42.948961  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:42.948967  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:42.948976  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:42 GMT
	I1109 22:07:42.948982  777892 round_trippers.go:580]     Audit-Id: 353d9348-da8f-436b-8fee-37731cbcab31
	I1109 22:07:42.948989  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:42.948995  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:42.949212  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:43.445890  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:43.445915  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:43.445925  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:43.445933  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:43.448420  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:43.448438  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:43.448447  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:43.448453  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:43.448459  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:43 GMT
	I1109 22:07:43.448465  777892 round_trippers.go:580]     Audit-Id: 439256cd-6226-49df-aae0-a151dd6bb7a6
	I1109 22:07:43.448471  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:43.448477  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:43.448591  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:43.946761  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:43.946796  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:43.946807  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:43.946814  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:43.949268  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:43.949292  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:43.949302  777892 round_trippers.go:580]     Audit-Id: 7354948e-8c85-4b21-bdab-b8e38abb45a5
	I1109 22:07:43.949308  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:43.949315  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:43.949325  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:43.949335  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:43.949342  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:43 GMT
	I1109 22:07:43.949623  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:43.949995  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:07:44.445874  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:44.445899  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:44.445911  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:44.445919  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:44.448504  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:44.448528  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:44.448536  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:44 GMT
	I1109 22:07:44.448543  777892 round_trippers.go:580]     Audit-Id: 77be155c-2ff0-4664-a2f2-21c92deae35e
	I1109 22:07:44.448549  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:44.448556  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:44.448562  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:44.448568  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:44.449036  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:44.946739  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:44.946772  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:44.946783  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:44.946796  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:44.949378  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:44.949404  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:44.949415  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:44.949422  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:44.949428  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:44.949435  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:44 GMT
	I1109 22:07:44.949442  777892 round_trippers.go:580]     Audit-Id: 25a38e5c-6be1-408f-8498-8bbdf63f5384
	I1109 22:07:44.949451  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:44.949754  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:45.446417  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:45.446441  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:45.446451  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:45.446458  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:45.448861  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:45.448881  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:45.448889  777892 round_trippers.go:580]     Audit-Id: 47e2cb53-854c-4389-bf57-31af5c7f26b9
	I1109 22:07:45.448896  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:45.448902  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:45.448909  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:45.448915  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:45.448921  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:45 GMT
	I1109 22:07:45.449077  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:45.946708  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:45.946733  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:45.946743  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:45.946751  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:45.949206  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:45.949230  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:45.949239  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:45.949246  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:45 GMT
	I1109 22:07:45.949252  777892 round_trippers.go:580]     Audit-Id: 5b0aa128-412b-48a3-91f1-7d51b786f7c0
	I1109 22:07:45.949259  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:45.949265  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:45.949272  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:45.949377  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:46.446421  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:46.446444  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:46.446454  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:46.446462  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:46.448918  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:46.448936  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:46.448944  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:46.448951  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:46.448958  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:46.448965  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:46.448971  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:46 GMT
	I1109 22:07:46.448980  777892 round_trippers.go:580]     Audit-Id: 76641916-f0fa-466e-b480-a5a533d99230
	I1109 22:07:46.449232  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:46.449605  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:07:46.946619  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:46.946648  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:46.946658  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:46.946665  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:46.949190  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:46.949210  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:46.949218  777892 round_trippers.go:580]     Audit-Id: f4f29733-fcce-49db-91ea-65e649e624d7
	I1109 22:07:46.949225  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:46.949231  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:46.949237  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:46.949244  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:46.949250  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:46 GMT
	I1109 22:07:46.949378  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"508","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5292 chars]
	I1109 22:07:47.446183  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:47.446206  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:47.446216  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:47.446225  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:47.449177  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:47.449198  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:47.449206  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:47 GMT
	I1109 22:07:47.449213  777892 round_trippers.go:580]     Audit-Id: 265ec171-76a1-493d-bfd4-e63b69aa3088
	I1109 22:07:47.449219  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:47.449226  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:47.449232  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:47.449238  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:47.449360  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:47.946484  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:47.946515  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:47.946525  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:47.946533  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:47.949051  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:47.949082  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:47.949091  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:47.949097  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:47.949103  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:47.949110  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:47.949117  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:47 GMT
	I1109 22:07:47.949123  777892 round_trippers.go:580]     Audit-Id: fb66925b-e328-4e6a-be44-7e60d0790df9
	I1109 22:07:47.949226  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:48.445879  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:48.445900  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:48.445911  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:48.445918  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:48.448365  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:48.448383  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:48.448391  777892 round_trippers.go:580]     Audit-Id: bc851370-0b01-41ff-9aa0-de5838e8f2f1
	I1109 22:07:48.448398  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:48.448404  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:48.448411  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:48.448417  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:48.448424  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:48 GMT
	I1109 22:07:48.448559  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:48.946327  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:48.946351  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:48.946361  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:48.946368  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:48.948892  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:48.948912  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:48.948921  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:48.948927  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:48.948934  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:48.948940  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:48.948946  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:48 GMT
	I1109 22:07:48.948952  777892 round_trippers.go:580]     Audit-Id: 9fae6b83-525a-4ba1-8c54-d9e9bf89cc0b
	I1109 22:07:48.949040  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:48.949420  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:07:49.446743  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:49.446764  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:49.446774  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:49.446782  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:49.449174  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:49.449196  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:49.449205  777892 round_trippers.go:580]     Audit-Id: 2657c6d8-55fd-4ad8-a45b-8f955d91781e
	I1109 22:07:49.449212  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:49.449219  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:49.449225  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:49.449238  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:49.449245  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:49 GMT
	I1109 22:07:49.449543  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:49.945958  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:49.945980  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:49.945990  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:49.945998  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:49.948514  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:49.948538  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:49.948546  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:49.948552  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:49.948558  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:49.948574  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:49.948582  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:49 GMT
	I1109 22:07:49.948588  777892 round_trippers.go:580]     Audit-Id: 28322362-036d-4068-a9e5-7b787bc48b5a
	I1109 22:07:49.948912  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:50.446479  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:50.446504  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:50.446514  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:50.446523  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:50.449033  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:50.449053  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:50.449062  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:50.449071  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:50.449078  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:50.449089  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:50 GMT
	I1109 22:07:50.449096  777892 round_trippers.go:580]     Audit-Id: 407d75b6-a6c8-4b70-bfeb-e30a89fb9026
	I1109 22:07:50.449109  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:50.449419  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:50.945950  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:50.945975  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:50.945985  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:50.945993  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:50.948370  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:50.948401  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:50.948410  777892 round_trippers.go:580]     Audit-Id: 9c76f9ab-6760-45ac-91db-ee2288a91d7c
	I1109 22:07:50.948416  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:50.948423  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:50.948429  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:50.948438  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:50.948444  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:50 GMT
	I1109 22:07:50.948754  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:51.446847  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:51.446869  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:51.446878  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:51.446886  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:51.449226  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:51.449250  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:51.449258  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:51.449265  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:51.449271  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:51.449278  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:51.449290  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:51 GMT
	I1109 22:07:51.449297  777892 round_trippers.go:580]     Audit-Id: 4b49930a-de68-4a77-90af-6cd846e22967
	I1109 22:07:51.449537  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:51.449919  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:07:51.946757  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:51.946779  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:51.946789  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:51.946797  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:51.949116  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:51.949136  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:51.949145  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:51.949151  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:51.949158  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:51 GMT
	I1109 22:07:51.949165  777892 round_trippers.go:580]     Audit-Id: 17e098d3-f387-423b-8cd0-df62e36fca5e
	I1109 22:07:51.949176  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:51.949183  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:51.949489  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:52.446058  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:52.446081  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:52.446091  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:52.446098  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:52.449733  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:07:52.449757  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:52.449766  777892 round_trippers.go:580]     Audit-Id: 9aa66d27-52a6-480f-9b29-b1e168dd0c11
	I1109 22:07:52.449774  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:52.449781  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:52.449808  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:52.449822  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:52.449828  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:52 GMT
	I1109 22:07:52.449943  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:52.946347  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:52.946374  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:52.946384  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:52.946392  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:52.948916  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:52.948935  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:52.948943  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:52.948949  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:52.948956  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:52.948962  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:52.948968  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:52 GMT
	I1109 22:07:52.948974  777892 round_trippers.go:580]     Audit-Id: 9dea0bed-a09e-4522-b1d0-51cc415e91ff
	I1109 22:07:52.950338  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:53.445879  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:53.445906  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:53.445919  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:53.445926  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:53.448506  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:53.448526  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:53.448534  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:53.448541  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:53 GMT
	I1109 22:07:53.448547  777892 round_trippers.go:580]     Audit-Id: 90fe0c15-22df-44ef-af1a-56464d36bb28
	I1109 22:07:53.448553  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:53.448559  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:53.448565  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:53.448687  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:53.946520  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:53.946542  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:53.946552  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:53.946560  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:53.949128  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:53.949187  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:53.949204  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:53.949212  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:53.949222  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:53.949230  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:53 GMT
	I1109 22:07:53.949238  777892 round_trippers.go:580]     Audit-Id: 19518b47-15f0-41f5-81b1-68de9482a8cf
	I1109 22:07:53.949246  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:53.949344  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:53.949744  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:07:54.446418  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:54.446440  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:54.446450  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:54.446458  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:54.448745  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:54.448769  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:54.448778  777892 round_trippers.go:580]     Audit-Id: b8e93862-ccbb-428f-8398-d33cb2b42276
	I1109 22:07:54.448784  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:54.448790  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:54.448796  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:54.448802  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:54.448809  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:54 GMT
	I1109 22:07:54.448926  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:54.945897  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:54.945921  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:54.945930  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:54.945938  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:54.948317  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:54.948339  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:54.948349  777892 round_trippers.go:580]     Audit-Id: 6425d981-e2c8-43b0-8d8e-7818de8122a4
	I1109 22:07:54.948356  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:54.948363  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:54.948371  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:54.948384  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:54.948390  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:54 GMT
	I1109 22:07:54.948647  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:55.445884  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:55.445909  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:55.445920  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:55.445927  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:55.448305  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:55.448327  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:55.448335  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:55.448342  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:55.448348  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:55.448355  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:55.448362  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:55 GMT
	I1109 22:07:55.448372  777892 round_trippers.go:580]     Audit-Id: 94c1150c-7f7c-4dcb-a38f-d255aec23e0c
	I1109 22:07:55.448690  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:55.946040  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:55.946067  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:55.946077  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:55.946084  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:55.948618  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:55.948639  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:55.948647  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:55.948654  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:55.948660  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:55.948666  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:55 GMT
	I1109 22:07:55.948672  777892 round_trippers.go:580]     Audit-Id: de492bc5-63bf-42ab-99af-1383125331e7
	I1109 22:07:55.948678  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:55.948785  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:56.445861  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:56.445883  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:56.445894  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:56.445901  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:56.448366  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:56.448392  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:56.448400  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:56.448407  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:56.448413  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:56.448419  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:56.448426  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:56 GMT
	I1109 22:07:56.448437  777892 round_trippers.go:580]     Audit-Id: cd79cea4-3366-4a82-85f5-2eded474bf4b
	I1109 22:07:56.448534  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:56.448993  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:07:56.945907  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:56.945930  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:56.945941  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:56.945949  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:56.948475  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:56.948498  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:56.948508  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:56.948515  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:56 GMT
	I1109 22:07:56.948521  777892 round_trippers.go:580]     Audit-Id: 548fd580-3b7c-4ab2-9052-403abe4bd831
	I1109 22:07:56.948527  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:56.948534  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:56.948540  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:56.948640  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:57.446175  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:57.446196  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:57.446206  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:57.446214  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:57.448695  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:57.448715  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:57.448723  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:57 GMT
	I1109 22:07:57.448730  777892 round_trippers.go:580]     Audit-Id: fd6411c0-7dd3-4190-ba21-434bd6b737c6
	I1109 22:07:57.448736  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:57.448742  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:57.448748  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:57.448754  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:57.448855  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:57.946335  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:57.946356  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:57.946366  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:57.946373  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:57.948935  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:57.948959  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:57.948967  777892 round_trippers.go:580]     Audit-Id: fc9981fc-59c8-49ff-8a5c-24925744b58e
	I1109 22:07:57.948974  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:57.948980  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:57.948986  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:57.948992  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:57.948998  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:57 GMT
	I1109 22:07:57.949323  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:58.445905  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:58.445929  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:58.445939  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:58.445947  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:58.448343  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:58.448363  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:58.448371  777892 round_trippers.go:580]     Audit-Id: 0cd6dc3d-8ecf-4d7b-a974-336135a92e0d
	I1109 22:07:58.448378  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:58.448384  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:58.448390  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:58.448396  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:58.448402  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:58 GMT
	I1109 22:07:58.448517  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:58.946622  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:58.946648  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:58.946658  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:58.946665  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:58.956763  777892 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1109 22:07:58.956783  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:58.956792  777892 round_trippers.go:580]     Audit-Id: 94482b83-4d4b-4b8f-9321-ed55cbafc11f
	I1109 22:07:58.956798  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:58.956804  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:58.956810  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:58.956816  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:58.956822  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:58 GMT
	I1109 22:07:58.957430  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:58.957822  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:07:59.445920  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:59.445941  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:59.445950  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:59.445958  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:59.448473  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:59.448495  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:59.448504  777892 round_trippers.go:580]     Audit-Id: 0acc6df2-99fd-4e98-b9f1-a915023b66f2
	I1109 22:07:59.448512  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:59.448518  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:59.448524  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:59.448534  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:59.448541  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:59 GMT
	I1109 22:07:59.448834  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:07:59.945897  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:07:59.945920  777892 round_trippers.go:469] Request Headers:
	I1109 22:07:59.945931  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:07:59.945939  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:07:59.948407  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:07:59.948427  777892 round_trippers.go:577] Response Headers:
	I1109 22:07:59.948436  777892 round_trippers.go:580]     Audit-Id: 1c851ba8-78cf-49e6-bd09-396acf27a198
	I1109 22:07:59.948442  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:07:59.948449  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:07:59.948455  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:07:59.948461  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:07:59.948467  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:07:59 GMT
	I1109 22:07:59.948583  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:00.446694  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:00.446719  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:00.446729  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:00.446737  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:00.449232  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:00.449251  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:00.449259  777892 round_trippers.go:580]     Audit-Id: 77ad4196-3782-4b8d-9968-3c5a20869528
	I1109 22:08:00.449265  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:00.449271  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:00.449277  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:00.449284  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:00.449290  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:00 GMT
	I1109 22:08:00.449431  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:00.946181  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:00.946207  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:00.946217  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:00.946225  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:00.948640  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:00.948664  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:00.948673  777892 round_trippers.go:580]     Audit-Id: 2976e236-a781-4c12-9543-e883dc406a11
	I1109 22:08:00.948679  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:00.948685  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:00.948692  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:00.948698  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:00.948705  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:00 GMT
	I1109 22:08:00.948814  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:01.445907  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:01.445934  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:01.445945  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:01.445953  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:01.448595  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:01.448619  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:01.448627  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:01.448634  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:01.448640  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:01.448646  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:01.448653  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:01 GMT
	I1109 22:08:01.448659  777892 round_trippers.go:580]     Audit-Id: 5fc1e49c-5efd-465e-934e-1102ebbf51a9
	I1109 22:08:01.448974  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:01.449365  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:08:01.946109  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:01.946135  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:01.946145  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:01.946153  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:01.948715  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:01.948741  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:01.948750  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:01.948757  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:01.948763  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:01.948769  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:01 GMT
	I1109 22:08:01.948776  777892 round_trippers.go:580]     Audit-Id: fb5b174d-b7b9-41f8-9df9-27404b69d09b
	I1109 22:08:01.948787  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:01.948880  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:02.445892  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:02.445914  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:02.445924  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:02.445932  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:02.448294  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:02.448323  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:02.448331  777892 round_trippers.go:580]     Audit-Id: 262d55ef-c7d9-416d-9ebc-62be0ed0d11d
	I1109 22:08:02.448338  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:02.448344  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:02.448350  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:02.448360  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:02.448366  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:02 GMT
	I1109 22:08:02.448478  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:02.946180  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:02.946203  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:02.946212  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:02.946221  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:02.948616  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:02.948638  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:02.948647  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:02.948654  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:02 GMT
	I1109 22:08:02.948661  777892 round_trippers.go:580]     Audit-Id: b8e1f30b-31ab-41ca-8b22-ee72a2dff9b8
	I1109 22:08:02.948667  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:02.948673  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:02.948679  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:02.948775  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:03.446447  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:03.446470  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:03.446480  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:03.446487  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:03.448760  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:03.448778  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:03.448786  777892 round_trippers.go:580]     Audit-Id: 00c7f4b8-307c-45f2-b821-4124c5b31098
	I1109 22:08:03.448793  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:03.448799  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:03.448805  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:03.448811  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:03.448817  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:03 GMT
	I1109 22:08:03.448943  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:03.945894  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:03.945919  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:03.945929  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:03.945937  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:03.948482  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:03.948509  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:03.948519  777892 round_trippers.go:580]     Audit-Id: 5e9ef566-729d-4f0d-8229-68b8cdb36bb4
	I1109 22:08:03.948525  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:03.948536  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:03.948545  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:03.948559  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:03.948567  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:03 GMT
	I1109 22:08:03.948839  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:03.949219  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:08:04.446516  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:04.446540  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:04.446550  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:04.446558  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:04.448986  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:04.449007  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:04.449016  777892 round_trippers.go:580]     Audit-Id: 266eb32c-bc1a-498d-ba9f-b2537b303d9f
	I1109 22:08:04.449023  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:04.449030  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:04.449036  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:04.449043  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:04.449052  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:04 GMT
	I1109 22:08:04.449202  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:04.945893  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:04.945918  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:04.945928  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:04.945936  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:04.948344  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:04.948374  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:04.948383  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:04.948390  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:04 GMT
	I1109 22:08:04.948396  777892 round_trippers.go:580]     Audit-Id: 215f065b-ee68-4797-a702-e8d1460013c2
	I1109 22:08:04.948402  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:04.948408  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:04.948414  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:04.948519  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:05.445901  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:05.445924  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:05.445934  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:05.445942  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:05.448480  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:05.448499  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:05.448508  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:05.448514  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:05.448520  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:05.448526  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:05.448532  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:05 GMT
	I1109 22:08:05.448538  777892 round_trippers.go:580]     Audit-Id: bde34c11-399c-4e7b-9e81-d4521034ddd7
	I1109 22:08:05.448696  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:05.946800  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:05.946821  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:05.946830  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:05.946837  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:05.949353  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:05.949378  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:05.949388  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:05 GMT
	I1109 22:08:05.949394  777892 round_trippers.go:580]     Audit-Id: 26eac2ca-18d6-4b10-a8e3-2aba3a797369
	I1109 22:08:05.949401  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:05.949407  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:05.949417  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:05.949432  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:05.949684  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:05.950068  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:08:06.445922  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:06.445947  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:06.445957  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:06.445966  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:06.448454  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:06.448478  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:06.448487  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:06.448494  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:06 GMT
	I1109 22:08:06.448500  777892 round_trippers.go:580]     Audit-Id: f813d736-4a4f-4e63-8dff-862e5b0b70da
	I1109 22:08:06.448508  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:06.448516  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:06.448527  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:06.448818  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:06.946216  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:06.946239  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:06.946249  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:06.946256  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:06.948659  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:06.948682  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:06.948694  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:06.948701  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:06 GMT
	I1109 22:08:06.948708  777892 round_trippers.go:580]     Audit-Id: 74a43e04-da62-4907-89c5-71aa710d4e15
	I1109 22:08:06.948714  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:06.948723  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:06.948735  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:06.949006  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:07.446349  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:07.446373  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:07.446383  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:07.446391  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:07.448820  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:07.448840  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:07.448849  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:07 GMT
	I1109 22:08:07.448864  777892 round_trippers.go:580]     Audit-Id: f1cb4215-a4cf-4ace-8c0e-94c802921693
	I1109 22:08:07.448871  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:07.448877  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:07.448884  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:07.448890  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:07.449025  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:07.946061  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:07.946083  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:07.946093  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:07.946101  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:07.948637  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:07.948663  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:07.948672  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:07.948679  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:07 GMT
	I1109 22:08:07.948686  777892 round_trippers.go:580]     Audit-Id: 0e08f9f1-b865-4338-ace0-229e654f41ef
	I1109 22:08:07.948692  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:07.948698  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:07.948704  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:07.948807  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:08.445885  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:08.445909  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:08.445919  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:08.445927  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:08.448362  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:08.448387  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:08.448397  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:08 GMT
	I1109 22:08:08.448404  777892 round_trippers.go:580]     Audit-Id: 19b24ccf-7ca8-4241-acac-31e1dfa1414a
	I1109 22:08:08.448411  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:08.448417  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:08.448424  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:08.448430  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:08.448546  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:08.448934  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:08:08.946403  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:08.946427  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:08.946437  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:08.946445  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:08.948905  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:08.948927  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:08.948937  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:08.948944  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:08.948951  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:08.948958  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:08 GMT
	I1109 22:08:08.948964  777892 round_trippers.go:580]     Audit-Id: 638eaade-7445-4b13-8007-c1e0381c20ec
	I1109 22:08:08.948980  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:08.949105  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:09.446789  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:09.446813  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:09.446823  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:09.446830  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:09.449271  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:09.449291  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:09.449300  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:09.449307  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:09.449313  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:09 GMT
	I1109 22:08:09.449319  777892 round_trippers.go:580]     Audit-Id: 97311be0-5e3b-45a2-bc59-3dd968c9f594
	I1109 22:08:09.449325  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:09.449331  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:09.449454  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:09.946430  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:09.946457  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:09.946467  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:09.946475  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:09.948942  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:09.948967  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:09.948976  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:09.948983  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:09.948989  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:09.948995  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:09.949001  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:09 GMT
	I1109 22:08:09.949008  777892 round_trippers.go:580]     Audit-Id: 9c689e97-1acf-4e9c-8f48-70ac3c9d653a
	I1109 22:08:09.949164  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:10.445910  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:10.445933  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:10.445944  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:10.445952  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:10.448394  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:10.448419  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:10.448427  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:10.448434  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:10 GMT
	I1109 22:08:10.448440  777892 round_trippers.go:580]     Audit-Id: 4ae8b579-dd15-4936-ad87-96623efef5fb
	I1109 22:08:10.448446  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:10.448452  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:10.448458  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:10.448652  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:10.449031  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:08:10.946762  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:10.946785  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:10.946796  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:10.946803  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:10.949254  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:10.949279  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:10.949287  777892 round_trippers.go:580]     Audit-Id: 91350664-5294-4473-b3cb-3911a68dbd87
	I1109 22:08:10.949294  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:10.949300  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:10.949306  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:10.949313  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:10.949319  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:10 GMT
	I1109 22:08:10.949610  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:11.446210  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:11.446234  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:11.446245  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:11.446253  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:11.448851  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:11.448877  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:11.448888  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:11.448895  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:11.448901  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:11.448908  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:11.448915  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:11 GMT
	I1109 22:08:11.448922  777892 round_trippers.go:580]     Audit-Id: 06c583eb-d845-45f1-aa41-bf85c74eec6c
	I1109 22:08:11.449235  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:11.945932  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:11.945961  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:11.945972  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:11.945979  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:11.948770  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:11.948793  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:11.948802  777892 round_trippers.go:580]     Audit-Id: 3567c180-ef6f-4155-b1da-a4bb957907bc
	I1109 22:08:11.948809  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:11.948815  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:11.948821  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:11.948827  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:11.948835  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:11 GMT
	I1109 22:08:11.949148  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:12.445883  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:12.445905  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:12.445915  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:12.445922  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:12.448285  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:12.448305  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:12.448313  777892 round_trippers.go:580]     Audit-Id: e3b6dd20-af22-43cc-b67f-ce79e9ed976f
	I1109 22:08:12.448320  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:12.448326  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:12.448332  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:12.448338  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:12.448348  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:12 GMT
	I1109 22:08:12.448591  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:12.946749  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:12.946774  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:12.946784  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:12.946791  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:12.949358  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:12.949383  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:12.949393  777892 round_trippers.go:580]     Audit-Id: dde79e70-5711-4a18-bff1-8436a150675e
	I1109 22:08:12.949400  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:12.949406  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:12.949413  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:12.949424  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:12.949430  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:12 GMT
	I1109 22:08:12.949640  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:12.950033  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:08:13.445995  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:13.446021  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:13.446031  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:13.446038  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:13.448562  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:13.448594  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:13.448604  777892 round_trippers.go:580]     Audit-Id: 0ffa6a56-d21d-4eb0-ba9f-a3575770ee3c
	I1109 22:08:13.448611  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:13.448617  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:13.448623  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:13.448629  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:13.448636  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:13 GMT
	I1109 22:08:13.448742  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:13.946724  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:13.946746  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:13.946756  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:13.946763  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:13.949281  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:13.949306  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:13.949314  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:13.949321  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:13.949327  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:13 GMT
	I1109 22:08:13.949333  777892 round_trippers.go:580]     Audit-Id: e60287f2-ebf0-4b05-a256-e61da8c67024
	I1109 22:08:13.949339  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:13.949345  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:13.949443  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:14.446568  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:14.446590  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:14.446602  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:14.446610  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:14.449181  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:14.449200  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:14.449209  777892 round_trippers.go:580]     Audit-Id: 8f83723b-574c-4f39-aac9-978907c11042
	I1109 22:08:14.449215  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:14.449221  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:14.449227  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:14.449233  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:14.449239  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:14 GMT
	I1109 22:08:14.449360  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:14.945895  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:14.945919  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:14.945930  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:14.945937  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:14.948341  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:14.948372  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:14.948382  777892 round_trippers.go:580]     Audit-Id: af1eb8c4-add1-4d86-8234-326b0f83b515
	I1109 22:08:14.948389  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:14.948395  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:14.948401  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:14.948409  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:14.948416  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:14 GMT
	I1109 22:08:14.948564  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:15.446710  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:15.446732  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:15.446743  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:15.446751  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:15.449266  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:15.449288  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:15.449296  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:15.449303  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:15 GMT
	I1109 22:08:15.449310  777892 round_trippers.go:580]     Audit-Id: ab618b2b-c30a-43d2-be03-e4190286b726
	I1109 22:08:15.449316  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:15.449322  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:15.449331  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:15.449607  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:15.449992  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
	I1109 22:08:15.945953  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:15.945997  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:15.946008  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:15.946015  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:15.948483  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:15.948503  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:15.948511  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:15.948518  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:15.948524  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:15.948531  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:15.948537  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:15 GMT
	I1109 22:08:15.948543  777892 round_trippers.go:580]     Audit-Id: 60dc1ade-4b45-4963-94dc-3d37d7a298c3
	I1109 22:08:15.948736  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:16.446820  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:16.446844  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:16.446854  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:16.446861  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:16.449382  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:16.449403  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:16.449411  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:16.449417  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:16.449424  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:16 GMT
	I1109 22:08:16.449430  777892 round_trippers.go:580]     Audit-Id: 39391fb2-a5c5-4d0a-a0b4-caebb5010187
	I1109 22:08:16.449440  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:16.449448  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:16.449779  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:16.946635  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:16.946660  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:16.946670  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:16.946678  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:16.949114  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:16.949137  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:16.949145  777892 round_trippers.go:580]     Audit-Id: 5bca19f4-40b1-482b-b2f3-30768419ed08
	I1109 22:08:16.949152  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:16.949158  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:16.949165  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:16.949173  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:16.949181  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:16 GMT
	I1109 22:08:16.949409  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:17.446272  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:17.446294  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:17.446303  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:17.446310  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:17.448748  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:17.448768  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:17.448777  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:17 GMT
	I1109 22:08:17.448784  777892 round_trippers.go:580]     Audit-Id: 2a2eb7cb-59bf-40e7-8c5d-e24d378d2a16
	I1109 22:08:17.448790  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:17.448796  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:17.448802  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:17.448808  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:17.449390  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:17.946605  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:17.946632  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:17.946643  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:17.946650  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:17.949126  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:17.949146  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:17.949155  777892 round_trippers.go:580]     Audit-Id: f6bd89ba-546b-43e3-accf-2df9d67ac438
	I1109 22:08:17.949161  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:17.949167  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:17.949173  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:17.949180  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:17.949196  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:17 GMT
	I1109 22:08:17.949300  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:17.949666  777892 node_ready.go:58] node "multinode-833232-m02" has status "Ready":"False"
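
The round_trippers/request.go dumps above come from client-go's debugging transport, which only fires at high klog verbosity. Below is a minimal sketch of enabling the same output in a standalone Go program; minikube reaches these levels through its own logging flags, and the per-level thresholds noted in the comment are approximate.

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// client-go's debug round tripper keys off klog's -v level: roughly,
	// v>=6 logs request URLs and status codes, v>=7 adds headers, and
	// v>=8 adds (truncated) response bodies like the dumps in this log.
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	if err := fs.Set("v", "8"); err != nil {
		panic(err)
	}
	klog.V(8).Info("verbose client-go logging enabled")
	klog.Flush()
}
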
	I1109 22:08:18.445994  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:18.446013  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:18.446023  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:18.446031  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:18.448283  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:18.448310  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:18.448319  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:18.448326  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:18.448332  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:18.448339  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:18 GMT
	I1109 22:08:18.448350  777892 round_trippers.go:580]     Audit-Id: 2b3b8f04-7e9d-4db3-a6ea-ccffaac387d9
	I1109 22:08:18.448356  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:18.448473  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:18.946607  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:18.946631  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:18.946641  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:18.946649  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:18.949152  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:18.949175  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:18.949184  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:18.949190  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:18.949196  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:18.949202  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:18.949213  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:18 GMT
	I1109 22:08:18.949222  777892 round_trippers.go:580]     Audit-Id: 7ff62b96-3a99-4208-8b6e-fca3eeb7a3e1
	I1109 22:08:18.949331  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:19.446047  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:19.446068  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:19.446078  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:19.446086  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:19.448510  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:19.448529  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:19.448537  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:19 GMT
	I1109 22:08:19.448544  777892 round_trippers.go:580]     Audit-Id: 9ccef1eb-3e89-4bfa-b978-30ee82728bbb
	I1109 22:08:19.448550  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:19.448556  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:19.448563  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:19.448569  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:19.448702  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:19.946291  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:19.946335  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:19.946346  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:19.946353  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:19.948880  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:19.948904  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:19.948913  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:19.948920  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:19.948927  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:19 GMT
	I1109 22:08:19.948933  777892 round_trippers.go:580]     Audit-Id: f6e01caa-394a-472f-99d9-70d96602979e
	I1109 22:08:19.948939  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:19.948946  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:19.949030  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"516","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5561 chars]
	I1109 22:08:20.446638  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:20.446664  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.446674  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.446681  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.449151  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.449171  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.449180  777892 round_trippers.go:580]     Audit-Id: d0aa64b9-c69b-4113-83c7-e9bc2f3f816c
	I1109 22:08:20.449187  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.449195  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.449201  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.449207  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.449214  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.449324  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"559","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I1109 22:08:20.449745  777892 node_ready.go:49] node "multinode-833232-m02" has status "Ready":"True"
	I1109 22:08:20.449769  777892 node_ready.go:38] duration metric: took 42.51097495s waiting for node "multinode-833232-m02" to be "Ready" ...
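
The 42.5s wait that just completed is the node-readiness poll: roughly every 500ms the client re-fetches the Node object and checks its NodeReady condition. A minimal sketch of that loop with client-go follows, assuming a standard kubeconfig; it is an illustration, not minikube's actual node_ready helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the NodeReady condition is "True".
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if nodeIsReady(n) {
			fmt.Printf("node %q has status \"Ready\":\"True\"\n", name)
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-833232-m02"); err != nil {
		panic(err)
	}
}
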
	I1109 22:08:20.449782  777892 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1109 22:08:20.449849  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1109 22:08:20.449862  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.449871  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.449880  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.453743  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:08:20.453807  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.453829  777892 round_trippers.go:580]     Audit-Id: 35ffa690-d4fc-4942-88f7-cc79b8f9b2b6
	I1109 22:08:20.453852  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.453878  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.453888  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.453908  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.453931  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.454834  777892 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"559"},"items":[{"metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"451","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1109 22:08:20.457838  777892 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kr4mg" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:20.457928  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-kr4mg
	I1109 22:08:20.457942  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.457951  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.457959  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.460331  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.460357  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.460366  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.460373  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.460379  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.460386  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.460396  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.460402  777892 round_trippers.go:580]     Audit-Id: 25cc1a86-adad-4114-a37f-450ca84b952e
	I1109 22:08:20.460684  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-kr4mg","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4","resourceVersion":"451","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"a815ca60-c295-445b-9580-e7335cdfb476","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a815ca60-c295-445b-9580-e7335cdfb476\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1109 22:08:20.461192  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:08:20.461206  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.461215  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.461224  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.463508  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.463527  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.463535  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.463541  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.463548  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.463557  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.463563  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.463576  777892 round_trippers.go:580]     Audit-Id: a7ba08e7-def3-4090-be17-73621db1d454
	I1109 22:08:20.463863  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:08:20.464240  777892 pod_ready.go:92] pod "coredns-5dd5756b68-kr4mg" in "kube-system" namespace has status "Ready":"True"
	I1109 22:08:20.464257  777892 pod_ready.go:81] duration metric: took 6.391248ms waiting for pod "coredns-5dd5756b68-kr4mg" in "kube-system" namespace to be "Ready" ...
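
Each per-pod wait below follows the same shape as this coredns check: fetch the pod, inspect its PodReady condition, and fetch the node it runs on. A compact, assumed sketch of that readiness sweep over kube-system pods; the helper name is hypothetical, the client-go calls are real.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the PodReady condition is "True".
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One list call, as in the log above, then a per-pod condition check.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("pod %q Ready=%v\n", p.Name, podIsReady(&p))
	}
}
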
	I1109 22:08:20.464267  777892 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:20.464368  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-833232
	I1109 22:08:20.464379  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.464388  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.464405  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.466693  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.466712  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.466732  777892 round_trippers.go:580]     Audit-Id: 6849ef3d-d9ab-44c6-a8a8-611e9ffbefa2
	I1109 22:08:20.466738  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.466746  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.466755  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.466764  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.466770  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.466860  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-833232","namespace":"kube-system","uid":"1b3a5828-6fa1-43ef-9fe5-0bd827bc607c","resourceVersion":"422","creationTimestamp":"2023-11-09T22:06:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"bcb0d7444037668b0544684a5f617409","kubernetes.io/config.mirror":"bcb0d7444037668b0544684a5f617409","kubernetes.io/config.seen":"2023-11-09T22:06:33.633002538Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1109 22:08:20.467303  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:08:20.467317  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.467324  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.467331  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.469456  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.469477  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.469485  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.469492  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.469498  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.469508  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.469519  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.469525  777892 round_trippers.go:580]     Audit-Id: a3a7d256-faac-46c6-b9d1-4decc53f4059
	I1109 22:08:20.469794  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:08:20.470176  777892 pod_ready.go:92] pod "etcd-multinode-833232" in "kube-system" namespace has status "Ready":"True"
	I1109 22:08:20.470194  777892 pod_ready.go:81] duration metric: took 5.91715ms waiting for pod "etcd-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:20.470212  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:20.470270  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-833232
	I1109 22:08:20.470281  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.470289  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.470296  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.472573  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.472627  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.472656  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.472677  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.472709  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.472736  777892 round_trippers.go:580]     Audit-Id: 429da305-7eff-4a7c-91ea-c1008b1e297d
	I1109 22:08:20.472758  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.472779  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.472932  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-833232","namespace":"kube-system","uid":"ac0a37a2-9eb3-4caa-9e04-eb883448846a","resourceVersion":"423","creationTimestamp":"2023-11-09T22:06:33Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"ece6c8a9968fab733b8b5674f1e0f3b3","kubernetes.io/config.mirror":"ece6c8a9968fab733b8b5674f1e0f3b3","kubernetes.io/config.seen":"2023-11-09T22:06:33.632994809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1109 22:08:20.473491  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:08:20.473508  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.473516  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.473524  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.475712  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.475761  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.475785  777892 round_trippers.go:580]     Audit-Id: 9476e551-60d6-4af0-a27f-d9259ff860a5
	I1109 22:08:20.475793  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.475799  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.475809  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.475821  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.475829  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.475924  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:08:20.476315  777892 pod_ready.go:92] pod "kube-apiserver-multinode-833232" in "kube-system" namespace has status "Ready":"True"
	I1109 22:08:20.476333  777892 pod_ready.go:81] duration metric: took 6.108846ms waiting for pod "kube-apiserver-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:20.476344  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:20.476402  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-833232
	I1109 22:08:20.476412  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.476420  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.476426  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.478729  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.478752  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.478761  777892 round_trippers.go:580]     Audit-Id: 213b19b2-8e1a-417f-b569-882babe8cb23
	I1109 22:08:20.478767  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.478778  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.478787  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.478799  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.478805  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.479001  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-833232","namespace":"kube-system","uid":"c145c0c9-2759-4085-8766-b69466b0ae80","resourceVersion":"424","creationTimestamp":"2023-11-09T22:06:33Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"85621c7f3e0293e83befbe0eda8a3b19","kubernetes.io/config.mirror":"85621c7f3e0293e83befbe0eda8a3b19","kubernetes.io/config.seen":"2023-11-09T22:06:25.611885873Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1109 22:08:20.479515  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:08:20.479543  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.479552  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.479559  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.481834  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.481858  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.481867  777892 round_trippers.go:580]     Audit-Id: 4914c34a-8dfc-4866-a6c5-ebef0758f1df
	I1109 22:08:20.481874  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.481880  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.481887  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.481893  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.481900  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.482223  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:08:20.482640  777892 pod_ready.go:92] pod "kube-controller-manager-multinode-833232" in "kube-system" namespace has status "Ready":"True"
	I1109 22:08:20.482658  777892 pod_ready.go:81] duration metric: took 6.306867ms waiting for pod "kube-controller-manager-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:20.482670  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5wpvb" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:20.647030  777892 request.go:629] Waited for 164.295382ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wpvb
	I1109 22:08:20.647106  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5wpvb
	I1109 22:08:20.647153  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.647191  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.647218  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.649735  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.649758  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.649767  777892 round_trippers.go:580]     Audit-Id: 78c6d0cb-242f-4dc0-83ee-8d616b9450fb
	I1109 22:08:20.649775  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.649781  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.649787  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.649795  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.649801  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.650065  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5wpvb","generateName":"kube-proxy-","namespace":"kube-system","uid":"21518b68-07bb-4838-b396-432262d69868","resourceVersion":"528","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b65e1464-d3a2-48a3-b16f-bf49038c0975","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b65e1464-d3a2-48a3-b16f-bf49038c0975\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1109 22:08:20.846825  777892 request.go:629] Waited for 196.24609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:20.846949  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232-m02
	I1109 22:08:20.846963  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:20.846973  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:20.846981  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:20.849481  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:20.849505  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:20.849514  777892 round_trippers.go:580]     Audit-Id: 96f81ec4-a53b-4e64-a683-9c6eef1cda25
	I1109 22:08:20.849520  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:20.849527  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:20.849533  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:20.849539  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:20.849546  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:20 GMT
	I1109 22:08:20.849771  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232-m02","uid":"19575e5e-6f10-4cbb-8ccd-6b23a58e7f8c","resourceVersion":"559","creationTimestamp":"2023-11-09T22:07:36Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:07:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 5378 chars]
	I1109 22:08:20.850163  777892 pod_ready.go:92] pod "kube-proxy-5wpvb" in "kube-system" namespace has status "Ready":"True"
	I1109 22:08:20.850181  777892 pod_ready.go:81] duration metric: took 367.499404ms waiting for pod "kube-proxy-5wpvb" in "kube-system" namespace to be "Ready" ...
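
The "Waited for ...ms due to client-side throttling, not priority and fairness" entries around these kube-proxy checks come from client-go's token-bucket limiter on the client, not from the API server. A minimal sketch of where that limiter is configured; the defaults cited in the comment are client-go's documented defaults, and raising them here is purely illustrative.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClient(qps float32, burst int) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		return nil, err
	}
	// rest.Config.QPS defaults to 5 and Burst to 10 when left at zero; once
	// the burst bucket drains, each further request blocks and logs the
	// "Waited for ... due to client-side throttling" message seen above.
	cfg.QPS = qps
	cfg.Burst = burst
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newClient(50, 100); err != nil {
		panic(err)
	}
}
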
	I1109 22:08:20.850193  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgbc8" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:21.047516  777892 request.go:629] Waited for 197.237075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgbc8
	I1109 22:08:21.047613  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgbc8
	I1109 22:08:21.047625  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:21.047635  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:21.047643  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:21.050411  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:21.050435  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:21.050445  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:21.050452  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:21 GMT
	I1109 22:08:21.050458  777892 round_trippers.go:580]     Audit-Id: 94947f96-56c3-446c-b242-92736173d424
	I1109 22:08:21.050487  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:21.050500  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:21.050507  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:21.050632  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgbc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"51c0aad4-80b1-47a7-9a64-07cef5c5b95f","resourceVersion":"418","creationTimestamp":"2023-11-09T22:06:46Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b65e1464-d3a2-48a3-b16f-bf49038c0975","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b65e1464-d3a2-48a3-b16f-bf49038c0975\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1109 22:08:21.247454  777892 request.go:629] Waited for 196.306168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:08:21.247534  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:08:21.247544  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:21.247553  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:21.247564  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:21.250027  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:21.250097  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:21.250133  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:21.250158  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:21.250175  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:21.250182  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:21.250188  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:21 GMT
	I1109 22:08:21.250208  777892 round_trippers.go:580]     Audit-Id: cf5b9056-086e-49fc-916f-a1d7e6ad8e30
	I1109 22:08:21.250364  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:08:21.250803  777892 pod_ready.go:92] pod "kube-proxy-jgbc8" in "kube-system" namespace has status "Ready":"True"
	I1109 22:08:21.250821  777892 pod_ready.go:81] duration metric: took 400.618302ms waiting for pod "kube-proxy-jgbc8" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:21.250833  777892 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:21.447230  777892 request.go:629] Waited for 196.330504ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-833232
	I1109 22:08:21.447346  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-833232
	I1109 22:08:21.447377  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:21.447392  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:21.447400  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:21.450147  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:21.450204  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:21.450216  777892 round_trippers.go:580]     Audit-Id: 07631e8d-14d4-499a-a9a5-56db2fbe125b
	I1109 22:08:21.450223  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:21.450234  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:21.450241  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:21.450259  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:21.450270  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:21 GMT
	I1109 22:08:21.450380  777892 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-833232","namespace":"kube-system","uid":"2c24f114-7915-434c-a183-7dfd0695543e","resourceVersion":"425","creationTimestamp":"2023-11-09T22:06:33Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9df1ddff0806f6f72d247e55c05e117c","kubernetes.io/config.mirror":"9df1ddff0806f6f72d247e55c05e117c","kubernetes.io/config.seen":"2023-11-09T22:06:33.633001357Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-09T22:06:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1109 22:08:21.647070  777892 request.go:629] Waited for 196.248765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:08:21.647178  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-833232
	I1109 22:08:21.647188  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:21.647205  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:21.647213  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:21.650248  777892 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 22:08:21.650287  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:21.650295  777892 round_trippers.go:580]     Audit-Id: 0ab6d30e-cc1e-49ed-8435-762e1f21fdaa
	I1109 22:08:21.650302  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:21.650308  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:21.650334  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:21.650341  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:21.650350  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:21 GMT
	I1109 22:08:21.650451  777892 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-09T22:06:30Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1109 22:08:21.650880  777892 pod_ready.go:92] pod "kube-scheduler-multinode-833232" in "kube-system" namespace has status "Ready":"True"
	I1109 22:08:21.650899  777892 pod_ready.go:81] duration metric: took 400.05563ms waiting for pod "kube-scheduler-multinode-833232" in "kube-system" namespace to be "Ready" ...
	I1109 22:08:21.650911  777892 pod_ready.go:38] duration metric: took 1.201113503s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 22:08:21.650923  777892 system_svc.go:44] waiting for kubelet service to be running ...
	I1109 22:08:21.650981  777892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 22:08:21.665450  777892 system_svc.go:56] duration metric: took 14.518272ms WaitForService to wait for kubelet.
	I1109 22:08:21.665475  777892 kubeadm.go:581] duration metric: took 43.746937466s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 22:08:21.665495  777892 node_conditions.go:102] verifying NodePressure condition ...
	I1109 22:08:21.847019  777892 request.go:629] Waited for 181.446737ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1109 22:08:21.847102  777892 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1109 22:08:21.847131  777892 round_trippers.go:469] Request Headers:
	I1109 22:08:21.847151  777892 round_trippers.go:473]     Accept: application/json, */*
	I1109 22:08:21.847173  777892 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1109 22:08:21.849715  777892 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 22:08:21.849743  777892 round_trippers.go:577] Response Headers:
	I1109 22:08:21.849752  777892 round_trippers.go:580]     Audit-Id: 7d1fb4f0-213e-427c-9b8b-ddf64c08082e
	I1109 22:08:21.849758  777892 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 22:08:21.849766  777892 round_trippers.go:580]     Content-Type: application/json
	I1109 22:08:21.849776  777892 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 436546c0-bffe-43bf-b4cf-bfb237ffad31
	I1109 22:08:21.849789  777892 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e7a33f29-37ac-4726-a064-0fa8e38d79de
	I1109 22:08:21.849795  777892 round_trippers.go:580]     Date: Thu, 09 Nov 2023 22:08:21 GMT
	I1109 22:08:21.849978  777892 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"562"},"items":[{"metadata":{"name":"multinode-833232","uid":"81f7703a-728b-4b40-9379-5b80b23bab0c","resourceVersion":"435","creationTimestamp":"2023-11-09T22:06:30Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-833232","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ab3333ccf4df2ea5ea1199c82f7295530890595b","minikube.k8s.io/name":"multinode-833232","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_09T22_06_34_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12332 chars]
	I1109 22:08:21.850631  777892 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 22:08:21.850654  777892 node_conditions.go:123] node cpu capacity is 2
	I1109 22:08:21.850664  777892 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 22:08:21.850670  777892 node_conditions.go:123] node cpu capacity is 2
	I1109 22:08:21.850675  777892 node_conditions.go:105] duration metric: took 185.175519ms to run NodePressure ...
	I1109 22:08:21.850686  777892 start.go:228] waiting for startup goroutines ...
	I1109 22:08:21.850716  777892 start.go:242] writing updated cluster config ...
	I1109 22:08:21.851017  777892 ssh_runner.go:195] Run: rm -f paused
	I1109 22:08:21.909061  777892 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1109 22:08:21.912420  777892 out.go:177] * Done! kubectl is now configured to use "multinode-833232" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 09 22:07:19 multinode-833232 crio[897]: time="2023-11-09 22:07:19.292952934Z" level=info msg="Starting container: 2750d355636b31e836e7bfbe6dc2fc70014736e995da902883a31e37ec9b4466" id=74000204-f78c-4ed5-81e4-70dfcb75616a name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 22:07:19 multinode-833232 crio[897]: time="2023-11-09 22:07:19.301153993Z" level=info msg="Created container d6c78ec17b78f484bcca8440b477a2396a14cb15e745930f8e807997fdb9c336: kube-system/coredns-5dd5756b68-kr4mg/coredns" id=60a8006e-3f19-4195-85bf-da2d86024698 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 22:07:19 multinode-833232 crio[897]: time="2023-11-09 22:07:19.301942567Z" level=info msg="Starting container: d6c78ec17b78f484bcca8440b477a2396a14cb15e745930f8e807997fdb9c336" id=481bf67d-cf0c-4066-8315-f06dc524c59e name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 22:07:19 multinode-833232 crio[897]: time="2023-11-09 22:07:19.309086828Z" level=info msg="Started container" PID=1939 containerID=2750d355636b31e836e7bfbe6dc2fc70014736e995da902883a31e37ec9b4466 description=kube-system/storage-provisioner/storage-provisioner id=74000204-f78c-4ed5-81e4-70dfcb75616a name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a1e098a46dd060b91379d75c95102e33c869dd0fe58daa77e0cfe6348e9809e
	Nov 09 22:07:19 multinode-833232 crio[897]: time="2023-11-09 22:07:19.323165332Z" level=info msg="Started container" PID=1948 containerID=d6c78ec17b78f484bcca8440b477a2396a14cb15e745930f8e807997fdb9c336 description=kube-system/coredns-5dd5756b68-kr4mg/coredns id=481bf67d-cf0c-4066-8315-f06dc524c59e name=/runtime.v1.RuntimeService/StartContainer sandboxID=404fbe9b47e6e84fd7d024d005c18bc64c3ba4389799a78b0d458c9c3b6b7ceb
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.124411358Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-76fbj/POD" id=aea74162-cdb6-4eec-b5f1-ab308c6b3862 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.124468195Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.145580322Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-76fbj Namespace:default ID:888506acd20e6ca7a1c4ef155cf97a6fdb1247bbfa4c4ad925c4949fca776549 UID:fc53fd0a-55d8-487f-a6d0-20b548114f5d NetNS:/var/run/netns/ad5ca015-236d-4bbf-8d8b-b3605aee0360 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.145758266Z" level=info msg="Adding pod default_busybox-5bc68d56bd-76fbj to CNI network \"kindnet\" (type=ptp)"
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.158906215Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-76fbj Namespace:default ID:888506acd20e6ca7a1c4ef155cf97a6fdb1247bbfa4c4ad925c4949fca776549 UID:fc53fd0a-55d8-487f-a6d0-20b548114f5d NetNS:/var/run/netns/ad5ca015-236d-4bbf-8d8b-b3605aee0360 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.159270915Z" level=info msg="Checking pod default_busybox-5bc68d56bd-76fbj for CNI network kindnet (type=ptp)"
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.169840623Z" level=info msg="Ran pod sandbox 888506acd20e6ca7a1c4ef155cf97a6fdb1247bbfa4c4ad925c4949fca776549 with infra container: default/busybox-5bc68d56bd-76fbj/POD" id=aea74162-cdb6-4eec-b5f1-ab308c6b3862 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.171764846Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=47ca8445-b136-473c-956e-a20e4299b7df name=/runtime.v1.ImageService/ImageStatus
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.172051449Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=47ca8445-b136-473c-956e-a20e4299b7df name=/runtime.v1.ImageService/ImageStatus
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.173143594Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=9dc0d336-2045-4338-8851-e418a43e912d name=/runtime.v1.ImageService/PullImage
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.174588066Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 09 22:08:23 multinode-833232 crio[897]: time="2023-11-09 22:08:23.811998068Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 09 22:08:24 multinode-833232 crio[897]: time="2023-11-09 22:08:24.959947982Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=9dc0d336-2045-4338-8851-e418a43e912d name=/runtime.v1.ImageService/PullImage
	Nov 09 22:08:24 multinode-833232 crio[897]: time="2023-11-09 22:08:24.961073309Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=f1d87b8e-644e-424f-ad15-e9550fa5c169 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 22:08:24 multinode-833232 crio[897]: time="2023-11-09 22:08:24.961704304Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f1d87b8e-644e-424f-ad15-e9550fa5c169 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 22:08:24 multinode-833232 crio[897]: time="2023-11-09 22:08:24.962522121Z" level=info msg="Creating container: default/busybox-5bc68d56bd-76fbj/busybox" id=b569879c-68f0-439a-821c-bc146312ee8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 22:08:24 multinode-833232 crio[897]: time="2023-11-09 22:08:24.962616011Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 09 22:08:25 multinode-833232 crio[897]: time="2023-11-09 22:08:25.034345353Z" level=info msg="Created container eb90cfed0ce9381294ac76c52a8aeb39deb54c48d263078b65cef65f2f96c4a6: default/busybox-5bc68d56bd-76fbj/busybox" id=b569879c-68f0-439a-821c-bc146312ee8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 22:08:25 multinode-833232 crio[897]: time="2023-11-09 22:08:25.035221220Z" level=info msg="Starting container: eb90cfed0ce9381294ac76c52a8aeb39deb54c48d263078b65cef65f2f96c4a6" id=a53c5254-e8c8-446e-88cf-fcca2713e106 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 22:08:25 multinode-833232 crio[897]: time="2023-11-09 22:08:25.045577728Z" level=info msg="Started container" PID=2096 containerID=eb90cfed0ce9381294ac76c52a8aeb39deb54c48d263078b65cef65f2f96c4a6 description=default/busybox-5bc68d56bd-76fbj/busybox id=a53c5254-e8c8-446e-88cf-fcca2713e106 name=/runtime.v1.RuntimeService/StartContainer sandboxID=888506acd20e6ca7a1c4ef155cf97a6fdb1247bbfa4c4ad925c4949fca776549
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	eb90cfed0ce93       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   888506acd20e6       busybox-5bc68d56bd-76fbj
	d6c78ec17b78f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   404fbe9b47e6e       coredns-5dd5756b68-kr4mg
	2750d355636b3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   4a1e098a46dd0       storage-provisioner
	b3e164ee09772       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                      About a minute ago   Running             kube-proxy                0                   376e43d5d5f61       kube-proxy-jgbc8
	e2ab954a26c01       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   85d94d2205211       kindnet-vdwtv
	5ba7f8692382c       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                      2 minutes ago        Running             kube-apiserver            0                   77eeb4da37335       kube-apiserver-multinode-833232
	88d1595ad2186       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                      2 minutes ago        Running             kube-scheduler            0                   65c27cfc0a522       kube-scheduler-multinode-833232
	73a2dba3447a8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      2 minutes ago        Running             etcd                      0                   80b14b5a5682f       etcd-multinode-833232
	4f175336b4742       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                      2 minutes ago        Running             kube-controller-manager   0                   ff2cf5aa90131       kube-controller-manager-multinode-833232
	
	* 
	* ==> coredns [d6c78ec17b78f484bcca8440b477a2396a14cb15e745930f8e807997fdb9c336] <==
	* [INFO] 10.244.0.3:59744 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000103639s
	[INFO] 10.244.1.2:38022 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101021s
	[INFO] 10.244.1.2:47947 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001086993s
	[INFO] 10.244.1.2:48430 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000086375s
	[INFO] 10.244.1.2:36138 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119852s
	[INFO] 10.244.1.2:38674 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000864397s
	[INFO] 10.244.1.2:50855 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062318s
	[INFO] 10.244.1.2:56638 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00008201s
	[INFO] 10.244.1.2:34080 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071212s
	[INFO] 10.244.0.3:59280 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105238s
	[INFO] 10.244.0.3:49825 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000078883s
	[INFO] 10.244.0.3:57672 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072311s
	[INFO] 10.244.0.3:48848 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072647s
	[INFO] 10.244.1.2:46672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133833s
	[INFO] 10.244.1.2:41858 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001146s
	[INFO] 10.244.1.2:42581 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080779s
	[INFO] 10.244.1.2:34145 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070916s
	[INFO] 10.244.0.3:55698 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000089452s
	[INFO] 10.244.0.3:46830 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000110251s
	[INFO] 10.244.0.3:46409 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087023s
	[INFO] 10.244.0.3:41000 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082051s
	[INFO] 10.244.1.2:36889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000092791s
	[INFO] 10.244.1.2:33215 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100036s
	[INFO] 10.244.1.2:56046 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000059987s
	[INFO] 10.244.1.2:37931 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000098035s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-833232
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-833232
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab3333ccf4df2ea5ea1199c82f7295530890595b
	                    minikube.k8s.io/name=multinode-833232
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_09T22_06_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Nov 2023 22:06:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-833232
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Nov 2023 22:08:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Nov 2023 22:07:18 +0000   Thu, 09 Nov 2023 22:06:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Nov 2023 22:07:18 +0000   Thu, 09 Nov 2023 22:06:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Nov 2023 22:07:18 +0000   Thu, 09 Nov 2023 22:06:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Nov 2023 22:07:18 +0000   Thu, 09 Nov 2023 22:07:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-833232
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 9788055349b8491ba12bc5367c18273b
	  System UUID:                9d203095-05a2-4b02-a3d0-67b7618ccee2
	  Boot ID:                    c6805f31-bd75-4a7d-9a37-90ff74c38794
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-76fbj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-kr4mg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     104s
	  kube-system                 etcd-multinode-833232                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         117s
	  kube-system                 kindnet-vdwtv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-multinode-833232             250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-multinode-833232    200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-jgbc8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-multinode-833232             100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node multinode-833232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node multinode-833232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node multinode-833232 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node multinode-833232 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node multinode-833232 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node multinode-833232 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           105s                 node-controller  Node multinode-833232 event: Registered Node multinode-833232 in Controller
	  Normal  NodeReady                72s                  kubelet          Node multinode-833232 status is now: NodeReady
	
	
	Name:               multinode-833232-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-833232-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 09 Nov 2023 22:07:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-833232-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 09 Nov 2023 22:08:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 09 Nov 2023 22:08:20 +0000   Thu, 09 Nov 2023 22:07:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 09 Nov 2023 22:08:20 +0000   Thu, 09 Nov 2023 22:07:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 09 Nov 2023 22:08:20 +0000   Thu, 09 Nov 2023 22:07:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 09 Nov 2023 22:08:20 +0000   Thu, 09 Nov 2023 22:08:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-833232-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f0d2e453b1940f9a9ef524a9efee11a
	  System UUID:                1b855871-08df-45a2-a79d-824f8edcdb2c
	  Boot ID:                    c6805f31-bd75-4a7d-9a37-90ff74c38794
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-zwn9f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-vnm4j               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-proxy-5wpvb            0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  NodeHasSufficientMemory  54s (x5 over 55s)  kubelet          Node multinode-833232-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x5 over 55s)  kubelet          Node multinode-833232-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x5 over 55s)  kubelet          Node multinode-833232-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node multinode-833232-m02 event: Registered Node multinode-833232-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-833232-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001047] FS-Cache: O-key=[8] '04613b0000000000'
	[  +0.000705] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000009519ed76
	[  +0.001234] FS-Cache: N-key=[8] '04613b0000000000'
	[  +1.883823] FS-Cache: Duplicate cookie detected
	[  +0.000701] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.000973] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000005eb91895
	[  +0.001121] FS-Cache: O-key=[8] '03613b0000000000'
	[  +0.000715] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=00000000afe277c2
	[  +0.001058] FS-Cache: N-key=[8] '03613b0000000000'
	[  +0.314346] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000a6326e35{9p.inode} n=000000000067384c
	[  +0.001081] FS-Cache: O-key=[8] '09613b0000000000'
	[  +0.000714] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000943] FS-Cache: N-cookie d=00000000a6326e35{9p.inode} n=000000004e0bd103
	[  +0.001050] FS-Cache: N-key=[8] '09613b0000000000'
	[  +3.214848] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=00000049 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=000000004b6c5454{9P.session} n=0000000040db7851
	[  +0.001155] FS-Cache: O-key=[10] '34323938393639353234'
	[  +0.000778] FS-Cache: N-cookie c=0000004a [p=00000002 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=000000004b6c5454{9P.session} n=00000000aa25bbf1
	[  +0.001089] FS-Cache: N-key=[10] '34323938393639353234'
	
	* 
	* ==> etcd [73a2dba3447a8adb4294ca576a01ea623e6fb208150338187e52c308d0b6c517] <==
	* {"level":"info","ts":"2023-11-09T22:06:26.507052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-11-09T22:06:26.507169Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-11-09T22:06:26.515005Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-09T22:06:26.515129Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-09T22:06:26.515213Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-09T22:06:26.515535Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-09T22:06:26.515649Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-09T22:06:27.374301Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-09T22:06:27.374464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-09T22:06:27.374512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-09T22:06:27.374576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-09T22:06:27.374619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-09T22:06:27.374655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-09T22:06:27.3747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-09T22:06:27.377641Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-09T22:06:27.382557Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-833232 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-09T22:06:27.382745Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-09T22:06:27.383557Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-09T22:06:27.383673Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-09T22:06:27.383732Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-09T22:06:27.383802Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-09T22:06:27.384889Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-09T22:06:27.38658Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-09T22:06:27.386648Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-09T22:06:27.396574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  22:08:30 up  4:51,  0 users,  load average: 0.42, 0.99, 1.00
	Linux multinode-833232 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [e2ab954a26c01429b798befe3c1c42b6c498604890e9a9795fe89949ba779550] <==
	* I1109 22:07:38.246538       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1109 22:07:38.246566       1 main.go:227] handling current node
	I1109 22:07:38.246577       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1109 22:07:38.246582       1 main.go:250] Node multinode-833232-m02 has CIDR [10.244.1.0/24] 
	I1109 22:07:38.246761       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1109 22:07:48.251565       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1109 22:07:48.251592       1 main.go:227] handling current node
	I1109 22:07:48.251602       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1109 22:07:48.251608       1 main.go:250] Node multinode-833232-m02 has CIDR [10.244.1.0/24] 
	I1109 22:07:58.264665       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1109 22:07:58.264692       1 main.go:227] handling current node
	I1109 22:07:58.264703       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1109 22:07:58.264709       1 main.go:250] Node multinode-833232-m02 has CIDR [10.244.1.0/24] 
	I1109 22:08:08.277215       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1109 22:08:08.277242       1 main.go:227] handling current node
	I1109 22:08:08.277253       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1109 22:08:08.277259       1 main.go:250] Node multinode-833232-m02 has CIDR [10.244.1.0/24] 
	I1109 22:08:18.290080       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1109 22:08:18.290110       1 main.go:227] handling current node
	I1109 22:08:18.290120       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1109 22:08:18.290126       1 main.go:250] Node multinode-833232-m02 has CIDR [10.244.1.0/24] 
	I1109 22:08:28.295260       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1109 22:08:28.295288       1 main.go:227] handling current node
	I1109 22:08:28.295299       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1109 22:08:28.295309       1 main.go:250] Node multinode-833232-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [5ba7f8692382c39266c99e281835c4c67f4e4306e5cf4bc670545bd6a298a3ff] <==
	* I1109 22:06:30.293629       1 controller.go:624] quota admission added evaluator for: namespaces
	I1109 22:06:30.293709       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 22:06:30.296168       1 shared_informer.go:318] Caches are synced for configmaps
	I1109 22:06:30.301511       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1109 22:06:30.301545       1 aggregator.go:166] initial CRD sync complete...
	I1109 22:06:30.301554       1 autoregister_controller.go:141] Starting autoregister controller
	I1109 22:06:30.301559       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 22:06:30.301565       1 cache.go:39] Caches are synced for autoregister controller
	I1109 22:06:30.315673       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 22:06:31.100160       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 22:06:31.104584       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 22:06:31.104606       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 22:06:31.611524       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 22:06:31.660885       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 22:06:31.812108       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 22:06:31.819541       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1109 22:06:31.820598       1 controller.go:624] quota admission added evaluator for: endpoints
	I1109 22:06:31.825127       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 22:06:32.179526       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1109 22:06:33.542428       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1109 22:06:33.554930       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 22:06:33.570618       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1109 22:06:46.483819       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1109 22:06:46.932470       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1109 22:08:25.962059       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400b4297a0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400b065db0), ResponseWriter:(*httpsnoop.rw)(0x400b065db0), Flusher:(*httpsnoop.rw)(0x400b065db0), CloseNotifier:(*httpsnoop.rw)(0x400b065db0), Pusher:(*httpsnoop.rw)(0x400b065db0)}}, encoder:(*versioning.codec)(0x400b436640), memAllocator:(*runtime.Allocator)(0x400b400fd8)})
	
	* 
	* ==> kube-controller-manager [4f175336b47427a3d60d645126e37ff590799cc4426da30b698047f84423879b] <==
	* I1109 22:06:47.403693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="148.15µs"
	I1109 22:07:18.831037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.142µs"
	I1109 22:07:18.847282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.802µs"
	I1109 22:07:19.811084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.560996ms"
	I1109 22:07:19.811817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.51µs"
	I1109 22:07:20.931174       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1109 22:07:36.923615       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-833232-m02\" does not exist"
	I1109 22:07:36.969212       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vnm4j"
	I1109 22:07:36.979724       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5wpvb"
	I1109 22:07:36.980256       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-833232-m02" podCIDRs=["10.244.1.0/24"]
	I1109 22:07:40.934206       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-833232-m02"
	I1109 22:07:40.934288       1 event.go:307] "Event occurred" object="multinode-833232-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-833232-m02 event: Registered Node multinode-833232-m02 in Controller"
	I1109 22:08:20.168190       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-833232-m02"
	I1109 22:08:22.764568       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1109 22:08:22.786207       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-zwn9f"
	I1109 22:08:22.810153       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-76fbj"
	I1109 22:08:22.819839       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.929756ms"
	I1109 22:08:22.846841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.42677ms"
	I1109 22:08:22.847018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="72.484µs"
	I1109 22:08:22.848438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.351µs"
	I1109 22:08:22.856993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.4µs"
	I1109 22:08:25.585846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.174552ms"
	I1109 22:08:25.586000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.914µs"
	I1109 22:08:25.934623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.146812ms"
	I1109 22:08:25.935342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.107µs"
	
	* 
	* ==> kube-proxy [b3e164ee097721f4ddb8f3c105bb16a7a6c11a0cfdeb7c7b1e394c6aea2439fd] <==
	* I1109 22:06:48.057198       1 server_others.go:69] "Using iptables proxy"
	I1109 22:06:48.144706       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1109 22:06:48.198078       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 22:06:48.200828       1 server_others.go:152] "Using iptables Proxier"
	I1109 22:06:48.200930       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 22:06:48.200962       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 22:06:48.201062       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 22:06:48.201312       1 server.go:846] "Version info" version="v1.28.3"
	I1109 22:06:48.201487       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 22:06:48.202260       1 config.go:188] "Starting service config controller"
	I1109 22:06:48.202398       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 22:06:48.202448       1 config.go:97] "Starting endpoint slice config controller"
	I1109 22:06:48.202477       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 22:06:48.203028       1 config.go:315] "Starting node config controller"
	I1109 22:06:48.204709       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 22:06:48.303705       1 shared_informer.go:318] Caches are synced for service config
	I1109 22:06:48.303766       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1109 22:06:48.305081       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [88d1595ad2186d1e82bf4683d959ff9d909936b38169f8ad3acbcac7d36dc135] <==
	* W1109 22:06:30.507735       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1109 22:06:30.507788       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1109 22:06:30.507900       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1109 22:06:30.507950       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1109 22:06:30.508067       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 22:06:30.508116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1109 22:06:30.508783       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1109 22:06:30.508861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1109 22:06:30.508958       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1109 22:06:30.509010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1109 22:06:30.509210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 22:06:30.509269       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1109 22:06:30.509421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1109 22:06:30.509462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1109 22:06:30.509836       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1109 22:06:30.511742       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1109 22:06:30.509891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1109 22:06:30.511909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1109 22:06:30.509945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1109 22:06:30.512000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1109 22:06:30.509985       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1109 22:06:30.512077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1109 22:06:31.312917       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1109 22:06:31.313040       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1109 22:06:31.891282       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 09 22:06:47 multinode-833232 kubelet[1385]: I1109 22:06:47.149465    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b34c0ee0-70b5-485d-8116-5a79eb0c520f-lib-modules\") pod \"kindnet-vdwtv\" (UID: \"b34c0ee0-70b5-485d-8116-5a79eb0c520f\") " pod="kube-system/kindnet-vdwtv"
	Nov 09 22:06:47 multinode-833232 kubelet[1385]: I1109 22:06:47.149570    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjrtt\" (UniqueName: \"kubernetes.io/projected/51c0aad4-80b1-47a7-9a64-07cef5c5b95f-kube-api-access-mjrtt\") pod \"kube-proxy-jgbc8\" (UID: \"51c0aad4-80b1-47a7-9a64-07cef5c5b95f\") " pod="kube-system/kube-proxy-jgbc8"
	Nov 09 22:06:47 multinode-833232 kubelet[1385]: I1109 22:06:47.149608    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51c0aad4-80b1-47a7-9a64-07cef5c5b95f-xtables-lock\") pod \"kube-proxy-jgbc8\" (UID: \"51c0aad4-80b1-47a7-9a64-07cef5c5b95f\") " pod="kube-system/kube-proxy-jgbc8"
	Nov 09 22:06:47 multinode-833232 kubelet[1385]: I1109 22:06:47.149633    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b34c0ee0-70b5-485d-8116-5a79eb0c520f-xtables-lock\") pod \"kindnet-vdwtv\" (UID: \"b34c0ee0-70b5-485d-8116-5a79eb0c520f\") " pod="kube-system/kindnet-vdwtv"
	Nov 09 22:06:47 multinode-833232 kubelet[1385]: I1109 22:06:47.149658    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51c0aad4-80b1-47a7-9a64-07cef5c5b95f-lib-modules\") pod \"kube-proxy-jgbc8\" (UID: \"51c0aad4-80b1-47a7-9a64-07cef5c5b95f\") " pod="kube-system/kube-proxy-jgbc8"
	Nov 09 22:06:47 multinode-833232 kubelet[1385]: I1109 22:06:47.149690    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b34c0ee0-70b5-485d-8116-5a79eb0c520f-cni-cfg\") pod \"kindnet-vdwtv\" (UID: \"b34c0ee0-70b5-485d-8116-5a79eb0c520f\") " pod="kube-system/kindnet-vdwtv"
	Nov 09 22:06:47 multinode-833232 kubelet[1385]: I1109 22:06:47.149714    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk6cv\" (UniqueName: \"kubernetes.io/projected/b34c0ee0-70b5-485d-8116-5a79eb0c520f-kube-api-access-fk6cv\") pod \"kindnet-vdwtv\" (UID: \"b34c0ee0-70b5-485d-8116-5a79eb0c520f\") " pod="kube-system/kindnet-vdwtv"
	Nov 09 22:06:47 multinode-833232 kubelet[1385]: I1109 22:06:47.149739    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51c0aad4-80b1-47a7-9a64-07cef5c5b95f-kube-proxy\") pod \"kube-proxy-jgbc8\" (UID: \"51c0aad4-80b1-47a7-9a64-07cef5c5b95f\") " pod="kube-system/kube-proxy-jgbc8"
	Nov 09 22:06:47 multinode-833232 kubelet[1385]: W1109 22:06:47.616205    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/crio-85d94d22052116f48a58507d2bc6c29e967034b4ce14dbd14aaf830d898585a7 WatchSource:0}: Error finding container 85d94d22052116f48a58507d2bc6c29e967034b4ce14dbd14aaf830d898585a7: Status 404 returned error can't find the container with id 85d94d22052116f48a58507d2bc6c29e967034b4ce14dbd14aaf830d898585a7
	Nov 09 22:06:48 multinode-833232 kubelet[1385]: I1109 22:06:48.741692    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-vdwtv" podStartSLOduration=2.741649967 podCreationTimestamp="2023-11-09 22:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-09 22:06:48.728745938 +0000 UTC m=+15.216529120" watchObservedRunningTime="2023-11-09 22:06:48.741649967 +0000 UTC m=+15.229433159"
	Nov 09 22:06:53 multinode-833232 kubelet[1385]: I1109 22:06:53.673556    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jgbc8" podStartSLOduration=7.673515676 podCreationTimestamp="2023-11-09 22:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-09 22:06:48.742263617 +0000 UTC m=+15.230046808" watchObservedRunningTime="2023-11-09 22:06:53.673515676 +0000 UTC m=+20.161298859"
	Nov 09 22:07:18 multinode-833232 kubelet[1385]: I1109 22:07:18.800239    1385 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 09 22:07:18 multinode-833232 kubelet[1385]: I1109 22:07:18.830683    1385 topology_manager.go:215] "Topology Admit Handler" podUID="888d0cf3-ae53-45a9-bfc5-dae176b2f1b4" podNamespace="kube-system" podName="coredns-5dd5756b68-kr4mg"
	Nov 09 22:07:18 multinode-833232 kubelet[1385]: I1109 22:07:18.834554    1385 topology_manager.go:215] "Topology Admit Handler" podUID="5135cf21-5a1c-4fd7-a69e-887e1bccbe91" podNamespace="kube-system" podName="storage-provisioner"
	Nov 09 22:07:19 multinode-833232 kubelet[1385]: I1109 22:07:19.001904    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/888d0cf3-ae53-45a9-bfc5-dae176b2f1b4-config-volume\") pod \"coredns-5dd5756b68-kr4mg\" (UID: \"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4\") " pod="kube-system/coredns-5dd5756b68-kr4mg"
	Nov 09 22:07:19 multinode-833232 kubelet[1385]: I1109 22:07:19.001961    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5135cf21-5a1c-4fd7-a69e-887e1bccbe91-tmp\") pod \"storage-provisioner\" (UID: \"5135cf21-5a1c-4fd7-a69e-887e1bccbe91\") " pod="kube-system/storage-provisioner"
	Nov 09 22:07:19 multinode-833232 kubelet[1385]: I1109 22:07:19.001987    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqsjm\" (UniqueName: \"kubernetes.io/projected/5135cf21-5a1c-4fd7-a69e-887e1bccbe91-kube-api-access-pqsjm\") pod \"storage-provisioner\" (UID: \"5135cf21-5a1c-4fd7-a69e-887e1bccbe91\") " pod="kube-system/storage-provisioner"
	Nov 09 22:07:19 multinode-833232 kubelet[1385]: I1109 22:07:19.002022    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpj5\" (UniqueName: \"kubernetes.io/projected/888d0cf3-ae53-45a9-bfc5-dae176b2f1b4-kube-api-access-hnpj5\") pod \"coredns-5dd5756b68-kr4mg\" (UID: \"888d0cf3-ae53-45a9-bfc5-dae176b2f1b4\") " pod="kube-system/coredns-5dd5756b68-kr4mg"
	Nov 09 22:07:19 multinode-833232 kubelet[1385]: W1109 22:07:19.183361    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/crio-4a1e098a46dd060b91379d75c95102e33c869dd0fe58daa77e0cfe6348e9809e WatchSource:0}: Error finding container 4a1e098a46dd060b91379d75c95102e33c869dd0fe58daa77e0cfe6348e9809e: Status 404 returned error can't find the container with id 4a1e098a46dd060b91379d75c95102e33c869dd0fe58daa77e0cfe6348e9809e
	Nov 09 22:07:19 multinode-833232 kubelet[1385]: W1109 22:07:19.188085    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/crio-404fbe9b47e6e84fd7d024d005c18bc64c3ba4389799a78b0d458c9c3b6b7ceb WatchSource:0}: Error finding container 404fbe9b47e6e84fd7d024d005c18bc64c3ba4389799a78b0d458c9c3b6b7ceb: Status 404 returned error can't find the container with id 404fbe9b47e6e84fd7d024d005c18bc64c3ba4389799a78b0d458c9c3b6b7ceb
	Nov 09 22:07:19 multinode-833232 kubelet[1385]: I1109 22:07:19.799614    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.799569095 podCreationTimestamp="2023-11-09 22:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-09 22:07:19.78704345 +0000 UTC m=+46.274826633" watchObservedRunningTime="2023-11-09 22:07:19.799569095 +0000 UTC m=+46.287352286"
	Nov 09 22:08:22 multinode-833232 kubelet[1385]: I1109 22:08:22.821382    1385 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-kr4mg" podStartSLOduration=96.821341196 podCreationTimestamp="2023-11-09 22:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-09 22:07:19.800227437 +0000 UTC m=+46.288010629" watchObservedRunningTime="2023-11-09 22:08:22.821341196 +0000 UTC m=+109.309124379"
	Nov 09 22:08:22 multinode-833232 kubelet[1385]: I1109 22:08:22.821642    1385 topology_manager.go:215] "Topology Admit Handler" podUID="fc53fd0a-55d8-487f-a6d0-20b548114f5d" podNamespace="default" podName="busybox-5bc68d56bd-76fbj"
	Nov 09 22:08:22 multinode-833232 kubelet[1385]: I1109 22:08:22.978197    1385 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmqgp\" (UniqueName: \"kubernetes.io/projected/fc53fd0a-55d8-487f-a6d0-20b548114f5d-kube-api-access-tmqgp\") pod \"busybox-5bc68d56bd-76fbj\" (UID: \"fc53fd0a-55d8-487f-a6d0-20b548114f5d\") " pod="default/busybox-5bc68d56bd-76fbj"
	Nov 09 22:08:23 multinode-833232 kubelet[1385]: W1109 22:08:23.168317    1385 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/crio-888506acd20e6ca7a1c4ef155cf97a6fdb1247bbfa4c4ad925c4949fca776549 WatchSource:0}: Error finding container 888506acd20e6ca7a1c4ef155cf97a6fdb1247bbfa4c4ad925c4949fca776549: Status 404 returned error can't find the container with id 888506acd20e6ca7a1c4ef155cf97a6fdb1247bbfa4c4ad925c4949fca776549
	

                                                
                                                
-- /stdout --
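The scheduler "cannot list resource ... at the cluster scope" warnings in the log above are the transient RBAC errors typically emitted while a restarted control plane is still wiring up its ClusterRoleBindings; the final "Caches are synced" line indicates they resolved. One way to confirm by hand that the permissions converged, sketched against the multinode-833232 context from this run:

	kubectl --context multinode-833232 auth can-i list services --as=system:kube-scheduler
	kubectl --context multinode-833232 get clusterrolebinding system:kube-scheduler -o wide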
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-833232 -n multinode-833232
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-833232 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.48s)
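The failing check can be replayed by hand. A sketch, assuming the busybox-5bc68d56bd-76fbj pod from the kubelet log above is still running and that the cluster publishes the usual host.minikube.internal alias for the host:

	kubectl --context multinode-833232 exec busybox-5bc68d56bd-76fbj -- nslookup host.minikube.internal
	kubectl --context multinode-833232 exec busybox-5bc68d56bd-76fbj -- ping -c 1 host.minikube.internal

A name-resolution failure here would point at the injected /etc/hosts entry; a ping timeout would point at host routing from the pod network.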

                                                
                                    
TestRunningBinaryUpgrade (70.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1507160070.exe start -p running-upgrade-286866 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1507160070.exe start -p running-upgrade-286866 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m2.57088032s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-286866 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-286866 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.49644759s)

                                                
                                                
-- stdout --
	* [running-upgrade-286866] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-286866 in cluster running-upgrade-286866
	* Pulling base image ...
	* Updating the running docker "running-upgrade-286866" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 22:24:35.706759  838368 out.go:296] Setting OutFile to fd 1 ...
	I1109 22:24:35.706978  838368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:24:35.707007  838368 out.go:309] Setting ErrFile to fd 2...
	I1109 22:24:35.707028  838368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:24:35.707318  838368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 22:24:35.707709  838368 out.go:303] Setting JSON to false
	I1109 22:24:35.709969  838368 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18426,"bootTime":1699550250,"procs":367,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 22:24:35.710087  838368 start.go:138] virtualization:  
	I1109 22:24:35.712530  838368 out.go:177] * [running-upgrade-286866] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 22:24:35.715166  838368 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 22:24:35.717117  838368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 22:24:35.715280  838368 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1109 22:24:35.715313  838368 notify.go:220] Checking for updates...
	I1109 22:24:35.721382  838368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:24:35.723111  838368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 22:24:35.724982  838368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 22:24:35.726801  838368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 22:24:35.729193  838368 config.go:182] Loaded profile config "running-upgrade-286866": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1109 22:24:35.731822  838368 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1109 22:24:35.734213  838368 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 22:24:35.776961  838368 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 22:24:35.777148  838368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:24:35.910690  838368 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-09 22:24:35.899967516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:24:35.910809  838368 docker.go:295] overlay module found
	I1109 22:24:35.913099  838368 out.go:177] * Using the docker driver based on existing profile
	I1109 22:24:35.916019  838368 start.go:298] selected driver: docker
	I1109 22:24:35.916043  838368 start.go:902] validating driver "docker" against &{Name:running-upgrade-286866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-286866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.147 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1109 22:24:35.916174  838368 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 22:24:35.916922  838368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:24:35.936945  838368 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1109 22:24:36.050286  838368 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-09 22:24:36.038021465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:24:36.050766  838368 cni.go:84] Creating CNI manager for ""
	I1109 22:24:36.050786  838368 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 22:24:36.050797  838368 start_flags.go:323] config:
	{Name:running-upgrade-286866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-286866 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.147 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1109 22:24:36.054032  838368 out.go:177] * Starting control plane node running-upgrade-286866 in cluster running-upgrade-286866
	I1109 22:24:36.056319  838368 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 22:24:36.058208  838368 out.go:177] * Pulling base image ...
	I1109 22:24:36.060155  838368 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1109 22:24:36.060355  838368 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1109 22:24:36.100021  838368 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1109 22:24:36.100045  838368 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1109 22:24:36.127245  838368 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1109 22:24:36.127400  838368 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/running-upgrade-286866/config.json ...
	I1109 22:24:36.127554  838368 cache.go:107] acquiring lock: {Name:mk0fb2e9d58bfe32f8d1db761b0337bed1329a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.127644  838368 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1109 22:24:36.127654  838368 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 107.24µs
	I1109 22:24:36.127663  838368 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1109 22:24:36.127670  838368 cache.go:194] Successfully downloaded all kic artifacts
	I1109 22:24:36.127673  838368 cache.go:107] acquiring lock: {Name:mk562459bcec5403e80f5c62ad32832a54565d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.127693  838368 start.go:365] acquiring machines lock for running-upgrade-286866: {Name:mk96d435180670f27a5443d408e3dbf94b326f58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.127712  838368 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1109 22:24:36.127718  838368 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 46.146µs
	I1109 22:24:36.127725  838368 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1109 22:24:36.127736  838368 start.go:369] acquired machines lock for "running-upgrade-286866" in 23.68µs
	I1109 22:24:36.127734  838368 cache.go:107] acquiring lock: {Name:mk8de15fecb3746b0f76783348b43f47c8853056 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.127750  838368 start.go:96] Skipping create...Using existing machine configuration
	I1109 22:24:36.127761  838368 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1109 22:24:36.127765  838368 fix.go:54] fixHost starting: 
	I1109 22:24:36.127766  838368 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 33.887µs
	I1109 22:24:36.127779  838368 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1109 22:24:36.127787  838368 cache.go:107] acquiring lock: {Name:mk8754c3c41de64f33f6d1748d623edc176abb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.127818  838368 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1109 22:24:36.127824  838368 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 36.364µs
	I1109 22:24:36.127831  838368 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1109 22:24:36.127839  838368 cache.go:107] acquiring lock: {Name:mk64c077fdde984c231a6bd4c100c4507daece68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.127865  838368 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1109 22:24:36.127871  838368 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 32.074µs
	I1109 22:24:36.127877  838368 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1109 22:24:36.127885  838368 cache.go:107] acquiring lock: {Name:mkb7fbb50808c8f6b1f3a6ba92fa44165f339dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.127908  838368 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1109 22:24:36.127913  838368 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 28.833µs
	I1109 22:24:36.127919  838368 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1109 22:24:36.127926  838368 cache.go:107] acquiring lock: {Name:mk8cf6b5fdce6cfda35ea920ea59cac04b0c118e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.127949  838368 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1109 22:24:36.127954  838368 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 28.496µs
	I1109 22:24:36.127960  838368 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1109 22:24:36.127968  838368 cache.go:107] acquiring lock: {Name:mkc21f29362cbbf14e9c030c0d3863baf44f442e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:24:36.128009  838368 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1109 22:24:36.128015  838368 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 48.278µs
	I1109 22:24:36.128021  838368 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1109 22:24:36.128028  838368 cli_runner.go:164] Run: docker container inspect running-upgrade-286866 --format={{.State.Status}}
	I1109 22:24:36.128037  838368 cache.go:87] Successfully saved all images to host disk.
	I1109 22:24:36.155095  838368 fix.go:102] recreateIfNeeded on running-upgrade-286866: state=Running err=<nil>
	W1109 22:24:36.155129  838368 fix.go:128] unexpected machine state, will restart: <nil>
	I1109 22:24:36.158209  838368 out.go:177] * Updating the running docker "running-upgrade-286866" container ...
	I1109 22:24:36.160191  838368 machine.go:88] provisioning docker machine ...
	I1109 22:24:36.160219  838368 ubuntu.go:169] provisioning hostname "running-upgrade-286866"
	I1109 22:24:36.160303  838368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-286866
	I1109 22:24:36.185735  838368 main.go:141] libmachine: Using SSH client type: native
	I1109 22:24:36.186187  838368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1109 22:24:36.186205  838368 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-286866 && echo "running-upgrade-286866" | sudo tee /etc/hostname
	I1109 22:24:36.353798  838368 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-286866
	
	I1109 22:24:36.353875  838368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-286866
	I1109 22:24:36.389970  838368 main.go:141] libmachine: Using SSH client type: native
	I1109 22:24:36.390397  838368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1109 22:24:36.390417  838368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-286866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-286866/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-286866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 22:24:36.551294  838368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 22:24:36.551325  838368 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 22:24:36.551357  838368 ubuntu.go:177] setting up certificates
	I1109 22:24:36.551368  838368 provision.go:83] configureAuth start
	I1109 22:24:36.551449  838368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-286866
	I1109 22:24:36.578178  838368 provision.go:138] copyHostCerts
	I1109 22:24:36.578241  838368 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 22:24:36.578255  838368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 22:24:36.578413  838368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 22:24:36.578593  838368 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 22:24:36.578608  838368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 22:24:36.578677  838368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 22:24:36.578803  838368 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 22:24:36.578815  838368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 22:24:36.578853  838368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 22:24:36.578908  838368 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-286866 san=[192.168.70.147 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-286866]
	I1109 22:24:36.836377  838368 provision.go:172] copyRemoteCerts
	I1109 22:24:36.836517  838368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 22:24:36.836583  838368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-286866
	I1109 22:24:36.864856  838368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/running-upgrade-286866/id_rsa Username:docker}
	I1109 22:24:36.970472  838368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 22:24:36.997194  838368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 22:24:37.030774  838368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1109 22:24:37.058440  838368 provision.go:86] duration metric: configureAuth took 507.054194ms
	I1109 22:24:37.058471  838368 ubuntu.go:193] setting minikube options for container-runtime
	I1109 22:24:37.058688  838368 config.go:182] Loaded profile config "running-upgrade-286866": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1109 22:24:37.058802  838368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-286866
	I1109 22:24:37.077922  838368 main.go:141] libmachine: Using SSH client type: native
	I1109 22:24:37.078372  838368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33861 <nil> <nil>}
	I1109 22:24:37.078393  838368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 22:24:37.660154  838368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 22:24:37.660177  838368 machine.go:91] provisioned docker machine in 1.499967196s
	I1109 22:24:37.660189  838368 start.go:300] post-start starting for "running-upgrade-286866" (driver="docker")
	I1109 22:24:37.660200  838368 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 22:24:37.660298  838368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 22:24:37.660344  838368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-286866
	I1109 22:24:37.679930  838368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/running-upgrade-286866/id_rsa Username:docker}
	I1109 22:24:37.779868  838368 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 22:24:37.784494  838368 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 22:24:37.784523  838368 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 22:24:37.784535  838368 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 22:24:37.784543  838368 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1109 22:24:37.784558  838368 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 22:24:37.784617  838368 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 22:24:37.784703  838368 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 22:24:37.784817  838368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 22:24:37.793528  838368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 22:24:37.816060  838368 start.go:303] post-start completed in 155.85556ms
	I1109 22:24:37.816152  838368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 22:24:37.816197  838368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-286866
	I1109 22:24:37.835884  838368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/running-upgrade-286866/id_rsa Username:docker}
	I1109 22:24:37.934441  838368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 22:24:37.940128  838368 fix.go:56] fixHost completed within 1.81236445s
	I1109 22:24:37.940149  838368 start.go:83] releasing machines lock for "running-upgrade-286866", held for 1.812405852s
	I1109 22:24:37.940217  838368 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-286866
	I1109 22:24:37.958699  838368 ssh_runner.go:195] Run: cat /version.json
	I1109 22:24:37.958753  838368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-286866
	I1109 22:24:37.958999  838368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 22:24:37.959035  838368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-286866
	I1109 22:24:37.989471  838368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/running-upgrade-286866/id_rsa Username:docker}
	I1109 22:24:38.002491  838368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33861 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/running-upgrade-286866/id_rsa Username:docker}
	W1109 22:24:38.246798  838368 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1109 22:24:38.246878  838368 ssh_runner.go:195] Run: systemctl --version
	I1109 22:24:38.252578  838368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 22:24:38.456690  838368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 22:24:38.464002  838368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:24:38.492022  838368 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 22:24:38.492106  838368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:24:38.522956  838368 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 22:24:38.522979  838368 start.go:472] detecting cgroup driver to use...
	I1109 22:24:38.523011  838368 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1109 22:24:38.523065  838368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 22:24:38.555190  838368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 22:24:38.568351  838368 docker.go:203] disabling cri-docker service (if available) ...
	I1109 22:24:38.568439  838368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 22:24:38.580791  838368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 22:24:38.594121  838368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1109 22:24:38.608063  838368 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1109 22:24:38.608176  838368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 22:24:38.754322  838368 docker.go:219] disabling docker service ...
	I1109 22:24:38.754435  838368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 22:24:38.768909  838368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 22:24:38.780910  838368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 22:24:38.919985  838368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 22:24:39.063460  838368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 22:24:39.075839  838368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 22:24:39.093821  838368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1109 22:24:39.093898  838368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:24:39.108758  838368 out.go:177] 
	W1109 22:24:39.110752  838368 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1109 22:24:39.110773  838368 out.go:239] * 
	W1109 22:24:39.111709  838368 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 22:24:39.114242  838368 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-286866 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
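The root cause is the sed failure above: the new binary tries to rewrite pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the v0.0.17 kicbase image that minikube v1.17.0 provisioned evidently does not ship that drop-in directory. A sketch for verifying against the still-running container (the monolithic /etc/crio/crio.conf location is an assumption about the old image):

	docker exec running-upgrade-286866 ls /etc/crio /etc/crio/crio.conf.d
	docker exec running-upgrade-286866 grep -n pause_image /etc/crio/crio.conf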
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-09 22:24:39.14688185 +0000 UTC m=+3403.888852702
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-286866
helpers_test.go:235: (dbg) docker inspect running-upgrade-286866:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a930dd3da95c8f06782274bdbc8cba1b762c5b1e7a18856b9b62840a062b8fae",
	        "Created": "2023-11-09T22:23:49.935744533Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 834891,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T22:23:50.375363421Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/a930dd3da95c8f06782274bdbc8cba1b762c5b1e7a18856b9b62840a062b8fae/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a930dd3da95c8f06782274bdbc8cba1b762c5b1e7a18856b9b62840a062b8fae/hostname",
	        "HostsPath": "/var/lib/docker/containers/a930dd3da95c8f06782274bdbc8cba1b762c5b1e7a18856b9b62840a062b8fae/hosts",
	        "LogPath": "/var/lib/docker/containers/a930dd3da95c8f06782274bdbc8cba1b762c5b1e7a18856b9b62840a062b8fae/a930dd3da95c8f06782274bdbc8cba1b762c5b1e7a18856b9b62840a062b8fae-json.log",
	        "Name": "/running-upgrade-286866",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-286866:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-286866",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/33b6e4292ae75a9efdaa197e22ab3a0c7db81b8e6c96d1c2875f8a52fdc6b595-init/diff:/var/lib/docker/overlay2/d76ba5b719d15ff955f959297ecac539c692a36c035f078b6597f0791d2d018d/diff:/var/lib/docker/overlay2/917a22c8a32d22431f50f1024f7b0d1c769184977dec621fa23685c5ff8e8cdb/diff:/var/lib/docker/overlay2/6a1676e8d376b69f3dc846fd625c5338a554067017b58efd4c7d4a6aa6031530/diff:/var/lib/docker/overlay2/8ed5ecf64444d0ac8b7720f1c74f7c7342ec0e1b406d880f78f6c69ee0a52c4d/diff:/var/lib/docker/overlay2/4ffcfb3a8d3c8e2f62857f08736a944774cd4dcaf3213ea195c8fedc6e1e38a8/diff:/var/lib/docker/overlay2/ccbd05a2243046be2ffd0791d2bbe7932f1d079f0885d20abca4a2955eeee255/diff:/var/lib/docker/overlay2/bd115b2113a137f0e2e4a936a75fec770c9349d1177866688457c6d44a599fac/diff:/var/lib/docker/overlay2/d4727bfd1a8954338f6808f70d5ee9db4f50680fd1b82c01b6eb1d786e3dcf81/diff:/var/lib/docker/overlay2/64fa46d373fc7b2c9191b13ec413000d17d8eaed92954832b5bb8b5d7e29cef0/diff:/var/lib/docker/overlay2/48204539bba337cca5410445e5f03881bf7f16a258b718c78853aae622ceac55/diff:/var/lib/docker/overlay2/a79afa2e8bd54057c6f3c5451571f3981fbc3fc4093c46093f5cf67e666e5266/diff:/var/lib/docker/overlay2/cada879aa885ca7aa1d9cc921c33a41c18ffd92eb05868bbf5b89d88dc5567e4/diff:/var/lib/docker/overlay2/59bba10032ab36a3c93678e014930acf417b3f9f35e1f4efbeab6a5f774e97ca/diff:/var/lib/docker/overlay2/6a8da0bba283153f3ba6aefb7352dee8608623ab73c2e5e880f27231c9d36773/diff:/var/lib/docker/overlay2/76ac07df130302c5b3235ab5d8291b34bb50c66b59d14d742bc392c9ceca1e81/diff:/var/lib/docker/overlay2/439120e58cdbc48b2ce323f37bd2c27b3039cfb4536aab49b2008227479ab2b5/diff:/var/lib/docker/overlay2/f0ba613b07ff21a6b27b3da67c9139e8e6ce772f12f73952c2a7c53e10ad0504/diff:/var/lib/docker/overlay2/52ddcc25eef27151bd05a550b900098b4af9e798018fc68a4a7b9607d63554fd/diff:/var/lib/docker/overlay2/2c4e939c8d6ce289c84be20ee1f18cb81de2aa12cd22aba21363c5872ff42eb9/diff:/var/lib/docker/overlay2/c699ab3d3b82d7e0aa43ab721bfac9394ce55dbf43a4759248dc40b698cc7625/diff:/var/lib/docker/overlay2/068e13d1cf33e10f597fd4bae9cf0e2a29048796e554e0f144934cd8caa67ce0/diff:/var/lib/docker/overlay2/356f548bb90f9c7b05555d832b1c464fbe968771d7856d98a12e6de1cf5bf2bf/diff:/var/lib/docker/overlay2/ae32a5dd4cf23b0c165d5826c6efdae46a20bedfd78928bc328084fb7e2dfaf0/diff:/var/lib/docker/overlay2/daae3dedb93122ef620f89ca89312de98760388bbd43aa72ba3f5320fbc7c8cc/diff:/var/lib/docker/overlay2/3c4e89a768be739677b989e9c1e2d612b1c7396d49e47fbcb1ea1aa3a0922d27/diff:/var/lib/docker/overlay2/5d61f50af888692d54008fe325bd34482657c501485c395a4060167e68476d2b/diff:/var/lib/docker/overlay2/e1bdb2596a12408892b15170f0f700c1de29e84a003fb483f1a97fa87104fbf8/diff:/var/lib/docker/overlay2/ed3ed63136123b5e47a95f81829aca87898af5d0e4cec6a109460ce43678b886/diff:/var/lib/docker/overlay2/0d6de2b56a43a41eeff70c7abfa90cc7a635dca8cd2a57b81bb97a85dd2f22a2/diff:/var/lib/docker/overlay2/37c0fad289b43ef7a15ad5ebe7d61110777578768532625eee7dcc47b85204d6/diff:/var/lib/docker/overlay2/815a0719ea7430e487028bbd915296a48521ce42d0a09cd267d939e02cb44a57/diff:/var/lib/docker/overlay2/298e04f122894721d43d75424e5e04b24378223f93bc295d326a1bb025d20572/diff:/var/lib/docker/overlay2/7a17355e62f95661046c160a4679e0b1d0679b3d489cba02d8f11e08edd638db/diff:/var/lib/docker/overlay2/b6e27627da116f982b7eef49a5702f2ab07fb2ecc9d21e4615e0f1118705d559/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33b6e4292ae75a9efdaa197e22ab3a0c7db81b8e6c96d1c2875f8a52fdc6b595/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33b6e4292ae75a9efdaa197e22ab3a0c7db81b8e6c96d1c2875f8a52fdc6b595/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33b6e4292ae75a9efdaa197e22ab3a0c7db81b8e6c96d1c2875f8a52fdc6b595/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-286866",
	                "Source": "/var/lib/docker/volumes/running-upgrade-286866/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-286866",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-286866",
	                "name.minikube.sigs.k8s.io": "running-upgrade-286866",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3ea610346054b120f6b7fd5b7ef27c48bc76bc5bc6de082cf0909f7ab841b60",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e3ea61034605",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-286866": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.147"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a930dd3da95c",
	                        "running-upgrade-286866"
	                    ],
	                    "NetworkID": "dab91e345810dc47cc5a62fd6d28ba074d241fb43e45e42d67c7a1cbfe5256d7",
	                    "EndpointID": "9c48019ecffbd2287709939137cb8c6d99d037beebf57e5da1da7ce549695e5d",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.147",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:93",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
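For reference, single fields from an inspect dump like the one above can be read back with a Go template instead of scanning the full JSON. A minimal sketch against this run's container name (assuming the container still exists at that point):

    docker inspect running-upgrade-286866 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
    # -> 192.168.70.147, per the Networks block above
    docker inspect running-upgrade-286866 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
    # -> 33858, the host port mapped to the apiserver port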
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-286866 -n running-upgrade-286866
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-286866 -n running-upgrade-286866: exit status 4 (378.716722ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 22:24:39.495240  839053 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-286866" does not appear in /home/jenkins/minikube-integration/17565-708188/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-286866" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-286866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-286866
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-286866: (3.146829877s)
--- FAIL: TestRunningBinaryUpgrade (70.90s)
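The exit status 4 above is the stale-kubeconfig case that minikube itself flags in its stdout. A minimal sketch of the repair it suggests, using this run's profile name (assuming the profile had not yet been deleted):

    out/minikube-linux-arm64 update-context -p running-upgrade-286866
    kubectl config current-context    # should again resolve to the profile's endpoint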

                                                
                                    
x
+
TestMissingContainerUpgrade (174.27s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.1699467251.exe start -p missing-upgrade-701984 --memory=2200 --driver=docker  --container-runtime=crio
E1109 22:19:38.634079  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.1699467251.exe start -p missing-upgrade-701984 --memory=2200 --driver=docker  --container-runtime=crio: (2m13.110732785s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-701984
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-701984: (1.6756581s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-701984
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-701984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-701984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (35.926427456s)

                                                
                                                
-- stdout --
	* [missing-upgrade-701984] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-701984 in cluster missing-upgrade-701984
	* Pulling base image ...
	* docker "missing-upgrade-701984" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 22:21:20.214142  824912 out.go:296] Setting OutFile to fd 1 ...
	I1109 22:21:20.214349  824912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:21:20.214361  824912 out.go:309] Setting ErrFile to fd 2...
	I1109 22:21:20.214367  824912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:21:20.214659  824912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 22:21:20.215141  824912 out.go:303] Setting JSON to false
	I1109 22:21:20.216407  824912 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18230,"bootTime":1699550250,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 22:21:20.216480  824912 start.go:138] virtualization:  
	I1109 22:21:20.220369  824912 out.go:177] * [missing-upgrade-701984] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 22:21:20.222447  824912 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 22:21:20.224127  824912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 22:21:20.222531  824912 notify.go:220] Checking for updates...
	I1109 22:21:20.229121  824912 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:21:20.231291  824912 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 22:21:20.232972  824912 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 22:21:20.234541  824912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 22:21:20.236881  824912 config.go:182] Loaded profile config "missing-upgrade-701984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1109 22:21:20.239155  824912 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1109 22:21:20.240758  824912 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 22:21:20.268187  824912 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 22:21:20.268291  824912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:21:20.362760  824912 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-11-09 22:21:20.35214236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:21:20.362869  824912 docker.go:295] overlay module found
	I1109 22:21:20.366411  824912 out.go:177] * Using the docker driver based on existing profile
	I1109 22:21:20.368238  824912 start.go:298] selected driver: docker
	I1109 22:21:20.368252  824912 start.go:902] validating driver "docker" against &{Name:missing-upgrade-701984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-701984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.33 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1109 22:21:20.368344  824912 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 22:21:20.369025  824912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:21:20.435792  824912 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2023-11-09 22:21:20.426218416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:21:20.436117  824912 cni.go:84] Creating CNI manager for ""
	I1109 22:21:20.436141  824912 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 22:21:20.436153  824912 start_flags.go:323] config:
	{Name:missing-upgrade-701984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-701984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.33 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1109 22:21:20.439485  824912 out.go:177] * Starting control plane node missing-upgrade-701984 in cluster missing-upgrade-701984
	I1109 22:21:20.441658  824912 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 22:21:20.445284  824912 out.go:177] * Pulling base image ...
	I1109 22:21:20.446994  824912 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1109 22:21:20.447075  824912 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1109 22:21:20.468612  824912 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1109 22:21:20.468822  824912 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1109 22:21:20.469629  824912 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1109 22:21:20.519621  824912 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1109 22:21:20.519828  824912 cache.go:107] acquiring lock: {Name:mk0fb2e9d58bfe32f8d1db761b0337bed1329a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:20.519925  824912 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1109 22:21:20.519941  824912 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 118.063µs
	I1109 22:21:20.519960  824912 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1109 22:21:20.519974  824912 cache.go:107] acquiring lock: {Name:mk562459bcec5403e80f5c62ad32832a54565d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:20.520088  824912 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1109 22:21:20.520255  824912 cache.go:107] acquiring lock: {Name:mk8de15fecb3746b0f76783348b43f47c8853056 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:20.520384  824912 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1109 22:21:20.520550  824912 cache.go:107] acquiring lock: {Name:mk8754c3c41de64f33f6d1748d623edc176abb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:20.520637  824912 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1109 22:21:20.520726  824912 cache.go:107] acquiring lock: {Name:mk64c077fdde984c231a6bd4c100c4507daece68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:20.520792  824912 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1109 22:21:20.520967  824912 cache.go:107] acquiring lock: {Name:mkb7fbb50808c8f6b1f3a6ba92fa44165f339dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:20.521047  824912 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1109 22:21:20.521172  824912 cache.go:107] acquiring lock: {Name:mk8cf6b5fdce6cfda35ea920ea59cac04b0c118e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:20.521246  824912 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1109 22:21:20.521342  824912 cache.go:107] acquiring lock: {Name:mkc21f29362cbbf14e9c030c0d3863baf44f442e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:20.521405  824912 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1109 22:21:20.522437  824912 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1109 22:21:20.522809  824912 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1109 22:21:20.522971  824912 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1109 22:21:20.523156  824912 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1109 22:21:20.523315  824912 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1109 22:21:20.523440  824912 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1109 22:21:20.523562  824912 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1109 22:21:20.523683  824912 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/missing-upgrade-701984/config.json ...
	W1109 22:21:20.893548  824912 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1109 22:21:20.893688  824912 cache.go:162] opening:  /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1109 22:21:20.906952  824912 cache.go:162] opening:  /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W1109 22:21:20.924896  824912 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1109 22:21:20.925001  824912 cache.go:162] opening:  /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1109 22:21:20.926946  824912 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1109 22:21:20.927057  824912 cache.go:162] opening:  /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1109 22:21:20.928633  824912 cache.go:162] opening:  /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1109 22:21:20.931665  824912 cache.go:162] opening:  /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I1109 22:21:20.970675  824912 cache.go:162] opening:  /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I1109 22:21:21.126745  824912 cache.go:157] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1109 22:21:21.126773  824912 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 605.808203ms
	I1109 22:21:21.126786  824912 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  4.94 MiB / 287.99 MiB [>_] 1.71% ? p/s ?
	I1109 22:21:21.413091  824912 cache.go:157] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1109 22:21:21.413173  824912 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 891.83004ms
	I1109 22:21:21.413200  824912 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  22.06 MiB / 287.99 MiB [>] 7.66% ? p/s ?
	I1109 22:21:21.576789  824912 cache.go:157] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1109 22:21:21.576828  824912 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.056270092s
	I1109 22:21:21.576842  824912 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  39.36 MiB / 287.99 MiB  13.67% 41.79 MiB
	I1109 22:21:22.383128  824912 cache.go:157] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1109 22:21:22.383158  824912 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.863181995s
	I1109 22:21:22.383172  824912 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  43.87 MiB / 287.99 MiB  15.23% 41.79 MiB
	I1109 22:21:22.535625  824912 cache.go:157] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1109 22:21:22.535745  824912 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 2.015492612s
	I1109 22:21:22.536053  824912 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  67.79 MiB / 287.99 MiB  23.54% 42.14 MiB
	I1109 22:21:22.961197  824912 cache.go:157] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1109 22:21:22.961224  824912 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.440499702s
	I1109 22:21:22.961276  824912 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  146.48 MiB / 287.99 MiB  50.86% 39.99 Mi
	I1109 22:21:25.428128  824912 cache.go:157] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1109 22:21:25.428156  824912 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 4.906986417s
	I1109 22:21:25.428231  824912 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1109 22:21:25.428261  824912 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 37.29 M
	I1109 22:21:28.774189  824912 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1109 22:21:28.774237  824912 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1109 22:21:29.678056  824912 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1109 22:21:29.678089  824912 cache.go:194] Successfully downloaded all kic artifacts
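The preload 404 and the arm64/amd64 arch-mismatch warnings earlier in this start are what forced the per-image caching above. Both are easy to confirm by hand; a sketch using the exact URL and one image name from this log (the curl/docker invocations are illustrative, not part of the test):

    curl -sI https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 | head -n1
    # expect a 404 status line, matching the preload warning
    docker pull --platform linux/arm64 registry.k8s.io/pause:3.2
    # pulls the arm64 variant explicitly, sidestepping the arch-mismatch fix-up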
	I1109 22:21:29.678142  824912 start.go:365] acquiring machines lock for missing-upgrade-701984: {Name:mk7f72f0b5ac28c8f61ea8ffe792e69acda3f861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:21:29.678210  824912 start.go:369] acquired machines lock for "missing-upgrade-701984" in 44.406µs
	I1109 22:21:29.678235  824912 start.go:96] Skipping create...Using existing machine configuration
	I1109 22:21:29.678250  824912 fix.go:54] fixHost starting: 
	I1109 22:21:29.678541  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:29.695114  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:29.695171  824912 fix.go:102] recreateIfNeeded on missing-upgrade-701984: state= err=unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:29.695191  824912 fix.go:107] machineExists: false. err=machine does not exist
	I1109 22:21:29.697247  824912 out.go:177] * docker "missing-upgrade-701984" container is missing, will recreate.
	I1109 22:21:29.699177  824912 delete.go:124] DEMOLISHING missing-upgrade-701984 ...
	I1109 22:21:29.699280  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:29.714930  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	W1109 22:21:29.714988  824912 stop.go:75] unable to get state: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:29.715011  824912 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:29.715487  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:29.741450  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:29.741548  824912 delete.go:82] Unable to get host status for missing-upgrade-701984, assuming it has already been deleted: state: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:29.741647  824912 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-701984
	W1109 22:21:29.772940  824912 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-701984 returned with exit code 1
	I1109 22:21:29.772976  824912 kic.go:371] could not find the container missing-upgrade-701984 to remove it. will try anyways
	I1109 22:21:29.773035  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:29.791409  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	W1109 22:21:29.791467  824912 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:29.791535  824912 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-701984 /bin/bash -c "sudo init 0"
	W1109 22:21:29.810542  824912 cli_runner.go:211] docker exec --privileged -t missing-upgrade-701984 /bin/bash -c "sudo init 0" returned with exit code 1
	I1109 22:21:29.810572  824912 oci.go:650] error shutdown missing-upgrade-701984: docker exec --privileged -t missing-upgrade-701984 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:30.811590  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:30.834516  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:30.834580  824912 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:30.834590  824912 oci.go:664] temporary error: container missing-upgrade-701984 status is  but expect it to be exited
	I1109 22:21:30.834624  824912 retry.go:31] will retry after 412.414713ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:31.247185  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:31.276556  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:31.276620  824912 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:31.276634  824912 oci.go:664] temporary error: container missing-upgrade-701984 status is  but expect it to be exited
	I1109 22:21:31.276661  824912 retry.go:31] will retry after 946.370477ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:32.223758  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:32.266574  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:32.266636  824912 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:32.266648  824912 oci.go:664] temporary error: container missing-upgrade-701984 status is  but expect it to be exited
	I1109 22:21:32.266679  824912 retry.go:31] will retry after 959.446461ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:33.226459  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:33.259178  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:33.259236  824912 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:33.259244  824912 oci.go:664] temporary error: container missing-upgrade-701984 status is  but expect it to be exited
	I1109 22:21:33.259271  824912 retry.go:31] will retry after 983.44078ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:34.243423  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:34.270163  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:34.270222  824912 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:34.270231  824912 oci.go:664] temporary error: container missing-upgrade-701984 status is  but expect it to be exited
	I1109 22:21:34.270257  824912 retry.go:31] will retry after 1.948425878s: couldn't verify container is exited. %v: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:36.219567  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:36.238595  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:36.238661  824912 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:36.238673  824912 oci.go:664] temporary error: container missing-upgrade-701984 status is  but expect it to be exited
	I1109 22:21:36.238699  824912 retry.go:31] will retry after 5.498418241s: couldn't verify container is exited. %v: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:41.737533  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:41.774269  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:41.774340  824912 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:41.774350  824912 oci.go:664] temporary error: container missing-upgrade-701984 status is  but expect it to be exited
	I1109 22:21:41.774375  824912 retry.go:31] will retry after 3.866553074s: couldn't verify container is exited. %v: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:45.641129  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:45.661824  824912 cli_runner.go:211] docker container inspect missing-upgrade-701984 --format={{.State.Status}} returned with exit code 1
	I1109 22:21:45.661884  824912 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	I1109 22:21:45.661894  824912 oci.go:664] temporary error: container missing-upgrade-701984 status is  but expect it to be exited
	I1109 22:21:45.661931  824912 oci.go:88] couldn't shut down missing-upgrade-701984 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-701984": docker container inspect missing-upgrade-701984 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-701984
	 
	I1109 22:21:45.661995  824912 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-701984
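The verify-shutdown loop above polls container state with growing delays and, when it never confirms an exited state, falls through to a forced remove. A rough shell equivalent of that retry-then-force pattern (delays approximate the ones logged; container name from this run):

    for delay in 0.4 0.9 1 1 2 5.5 3.9; do
        state=$(docker container inspect missing-upgrade-701984 --format '{{.State.Status}}' 2>/dev/null)
        [ "$state" = "exited" ] && break    # shutdown verified
        sleep "$delay"                      # back off and retry, as retry.go does above
    done
    docker rm -f -v missing-upgrade-701984  # force-remove fallback, as in the Run: line above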
	I1109 22:21:45.694750  824912 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-701984
	W1109 22:21:45.713951  824912 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-701984 returned with exit code 1
	I1109 22:21:45.714038  824912 cli_runner.go:164] Run: docker network inspect missing-upgrade-701984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 22:21:45.735570  824912 cli_runner.go:164] Run: docker network rm missing-upgrade-701984
	I1109 22:21:45.873066  824912 fix.go:114] Sleeping 1 second for extra luck!
	I1109 22:21:46.873193  824912 start.go:125] createHost starting for "" (driver="docker")
	I1109 22:21:46.878031  824912 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1109 22:21:46.878196  824912 start.go:159] libmachine.API.Create for "missing-upgrade-701984" (driver="docker")
	I1109 22:21:46.878217  824912 client.go:168] LocalClient.Create starting
	I1109 22:21:46.878293  824912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem
	I1109 22:21:46.878348  824912 main.go:141] libmachine: Decoding PEM data...
	I1109 22:21:46.878364  824912 main.go:141] libmachine: Parsing certificate...
	I1109 22:21:46.878424  824912 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem
	I1109 22:21:46.878445  824912 main.go:141] libmachine: Decoding PEM data...
	I1109 22:21:46.878455  824912 main.go:141] libmachine: Parsing certificate...
	I1109 22:21:46.878713  824912 cli_runner.go:164] Run: docker network inspect missing-upgrade-701984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 22:21:46.901131  824912 cli_runner.go:211] docker network inspect missing-upgrade-701984 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 22:21:46.901210  824912 network_create.go:281] running [docker network inspect missing-upgrade-701984] to gather additional debugging logs...
	I1109 22:21:46.901226  824912 cli_runner.go:164] Run: docker network inspect missing-upgrade-701984
	W1109 22:21:46.924840  824912 cli_runner.go:211] docker network inspect missing-upgrade-701984 returned with exit code 1
	I1109 22:21:46.924868  824912 network_create.go:284] error running [docker network inspect missing-upgrade-701984]: docker network inspect missing-upgrade-701984: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-701984 not found
	I1109 22:21:46.924882  824912 network_create.go:286] output of [docker network inspect missing-upgrade-701984]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-701984 not found
	
	** /stderr **
	I1109 22:21:46.925002  824912 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 22:21:46.950222  824912 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c8ab7f0d0118 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:72:9b:ff:43} reservation:<nil>}
	I1109 22:21:46.950588  824912 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-44f783ceb53c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:61:04:2b:bf} reservation:<nil>}
	I1109 22:21:46.950927  824912 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-35ce033bae78 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:18:06:79:94} reservation:<nil>}
	I1109 22:21:46.951336  824912 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002369240}
	I1109 22:21:46.951354  824912 network_create.go:124] attempt to create docker network missing-upgrade-701984 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 22:21:46.951430  824912 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-701984 missing-upgrade-701984
	I1109 22:21:47.044154  824912 network_create.go:108] docker network missing-upgrade-701984 192.168.76.0/24 created
	I1109 22:21:47.044187  824912 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-701984" container
	I1109 22:21:47.044260  824912 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 22:21:47.079874  824912 cli_runner.go:164] Run: docker volume create missing-upgrade-701984 --label name.minikube.sigs.k8s.io=missing-upgrade-701984 --label created_by.minikube.sigs.k8s.io=true
	I1109 22:21:47.097698  824912 oci.go:103] Successfully created a docker volume missing-upgrade-701984
	I1109 22:21:47.097782  824912 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-701984-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-701984 --entrypoint /usr/bin/test -v missing-upgrade-701984:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1109 22:21:47.830170  824912 oci.go:107] Successfully prepared a docker volume missing-upgrade-701984
	I1109 22:21:47.830200  824912 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1109 22:21:47.830741  824912 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 22:21:47.830875  824912 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 22:21:47.934722  824912 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-701984 --name missing-upgrade-701984 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-701984 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-701984 --network missing-upgrade-701984 --ip 192.168.76.2 --volume missing-upgrade-701984:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1109 22:21:48.443250  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Running}}
	I1109 22:21:48.481451  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	I1109 22:21:48.515578  824912 cli_runner.go:164] Run: docker exec missing-upgrade-701984 stat /var/lib/dpkg/alternatives/iptables
	I1109 22:21:48.592341  824912 oci.go:144] the created container "missing-upgrade-701984" has a running status.
	I1109 22:21:48.592372  824912 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa...
	I1109 22:21:48.844407  824912 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 22:21:48.869931  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	I1109 22:21:48.899461  824912 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 22:21:48.899486  824912 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-701984 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 22:21:48.985597  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	I1109 22:21:49.030089  824912 machine.go:88] provisioning docker machine ...
	I1109 22:21:49.030121  824912 ubuntu.go:169] provisioning hostname "missing-upgrade-701984"
	I1109 22:21:49.030184  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:49.057133  824912 main.go:141] libmachine: Using SSH client type: native
	I1109 22:21:49.058243  824912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1109 22:21:49.058269  824912 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-701984 && echo "missing-upgrade-701984" | sudo tee /etc/hostname
	I1109 22:21:49.058952  824912 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 22:21:52.244672  824912 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-701984
	
	I1109 22:21:52.244748  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:52.291771  824912 main.go:141] libmachine: Using SSH client type: native
	I1109 22:21:52.292171  824912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1109 22:21:52.292188  824912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-701984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-701984/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-701984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 22:21:52.448102  824912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 22:21:52.448187  824912 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 22:21:52.448252  824912 ubuntu.go:177] setting up certificates
	I1109 22:21:52.448281  824912 provision.go:83] configureAuth start
	I1109 22:21:52.448368  824912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-701984
	I1109 22:21:52.475651  824912 provision.go:138] copyHostCerts
	I1109 22:21:52.475715  824912 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 22:21:52.475728  824912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 22:21:52.475806  824912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 22:21:52.475898  824912 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 22:21:52.475908  824912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 22:21:52.475961  824912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 22:21:52.476022  824912 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 22:21:52.476031  824912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 22:21:52.476056  824912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 22:21:52.476117  824912 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-701984 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-701984]
	I1109 22:21:52.783752  824912 provision.go:172] copyRemoteCerts
	I1109 22:21:52.783823  824912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 22:21:52.783881  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:52.806427  824912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa Username:docker}
	I1109 22:21:52.903773  824912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 22:21:52.928806  824912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1109 22:21:52.952628  824912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 22:21:52.976200  824912 provision.go:86] duration metric: configureAuth took 527.894959ms
	I1109 22:21:52.976222  824912 ubuntu.go:193] setting minikube options for container-runtime
	I1109 22:21:52.976413  824912 config.go:182] Loaded profile config "missing-upgrade-701984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1109 22:21:52.976518  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:53.005508  824912 main.go:141] libmachine: Using SSH client type: native
	I1109 22:21:53.005985  824912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1109 22:21:53.006011  824912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 22:21:53.421024  824912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 22:21:53.421090  824912 machine.go:91] provisioned docker machine in 4.390979642s
	I1109 22:21:53.421114  824912 client.go:171] LocalClient.Create took 6.542890402s
	I1109 22:21:53.421141  824912 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-701984" took 6.542947264s
	I1109 22:21:53.421184  824912 start.go:300] post-start starting for "missing-upgrade-701984" (driver="docker")
	I1109 22:21:53.421206  824912 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 22:21:53.421303  824912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 22:21:53.421364  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:53.440328  824912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa Username:docker}
	I1109 22:21:53.543588  824912 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 22:21:53.547295  824912 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 22:21:53.547323  824912 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 22:21:53.547336  824912 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 22:21:53.547343  824912 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1109 22:21:53.547376  824912 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 22:21:53.547451  824912 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 22:21:53.547545  824912 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 22:21:53.547653  824912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 22:21:53.556131  824912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 22:21:53.577982  824912 start.go:303] post-start completed in 156.771365ms
	I1109 22:21:53.578473  824912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-701984
	I1109 22:21:53.596341  824912 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/missing-upgrade-701984/config.json ...
	I1109 22:21:53.596621  824912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 22:21:53.596674  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:53.614450  824912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa Username:docker}
	I1109 22:21:53.709456  824912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 22:21:53.714760  824912 start.go:128] duration metric: createHost completed in 6.841529176s
	I1109 22:21:53.714847  824912 cli_runner.go:164] Run: docker container inspect missing-upgrade-701984 --format={{.State.Status}}
	W1109 22:21:53.732449  824912 fix.go:128] unexpected machine state, will restart: <nil>
	I1109 22:21:53.732477  824912 machine.go:88] provisioning docker machine ...
	I1109 22:21:53.732495  824912 ubuntu.go:169] provisioning hostname "missing-upgrade-701984"
	I1109 22:21:53.732558  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:53.749992  824912 main.go:141] libmachine: Using SSH client type: native
	I1109 22:21:53.750497  824912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1109 22:21:53.750517  824912 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-701984 && echo "missing-upgrade-701984" | sudo tee /etc/hostname
	I1109 22:21:53.901471  824912 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-701984
	
	I1109 22:21:53.901549  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:53.920361  824912 main.go:141] libmachine: Using SSH client type: native
	I1109 22:21:53.920763  824912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1109 22:21:53.920781  824912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-701984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-701984/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-701984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 22:21:54.063425  824912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 22:21:54.063453  824912 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 22:21:54.063478  824912 ubuntu.go:177] setting up certificates
	I1109 22:21:54.063490  824912 provision.go:83] configureAuth start
	I1109 22:21:54.063552  824912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-701984
	I1109 22:21:54.082248  824912 provision.go:138] copyHostCerts
	I1109 22:21:54.082464  824912 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 22:21:54.082480  824912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 22:21:54.082565  824912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 22:21:54.082668  824912 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 22:21:54.082680  824912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 22:21:54.082708  824912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 22:21:54.082765  824912 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 22:21:54.082773  824912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 22:21:54.082797  824912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 22:21:54.082848  824912 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-701984 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-701984]
	I1109 22:21:54.262278  824912 provision.go:172] copyRemoteCerts
	I1109 22:21:54.262368  824912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 22:21:54.262414  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:54.279790  824912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa Username:docker}
	I1109 22:21:54.379338  824912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 22:21:54.402348  824912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1109 22:21:54.423226  824912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 22:21:54.445109  824912 provision.go:86] duration metric: configureAuth took 381.605515ms
	I1109 22:21:54.445139  824912 ubuntu.go:193] setting minikube options for container-runtime
	I1109 22:21:54.445313  824912 config.go:182] Loaded profile config "missing-upgrade-701984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1109 22:21:54.445431  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:54.463656  824912 main.go:141] libmachine: Using SSH client type: native
	I1109 22:21:54.464063  824912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1109 22:21:54.464087  824912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 22:21:54.788994  824912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 22:21:54.789018  824912 machine.go:91] provisioned docker machine in 1.0565326s
	I1109 22:21:54.789030  824912 start.go:300] post-start starting for "missing-upgrade-701984" (driver="docker")
	I1109 22:21:54.789040  824912 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 22:21:54.789110  824912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 22:21:54.789157  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:54.808661  824912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa Username:docker}
	I1109 22:21:54.907739  824912 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 22:21:54.911982  824912 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 22:21:54.912011  824912 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 22:21:54.912023  824912 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 22:21:54.912032  824912 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1109 22:21:54.912042  824912 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 22:21:54.912098  824912 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 22:21:54.912185  824912 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 22:21:54.912295  824912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 22:21:54.921021  824912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 22:21:54.943991  824912 start.go:303] post-start completed in 154.945825ms
	I1109 22:21:54.944072  824912 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 22:21:54.944121  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:54.965560  824912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa Username:docker}
	I1109 22:21:55.068894  824912 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 22:21:55.075074  824912 fix.go:56] fixHost completed within 25.39682016s
	I1109 22:21:55.075104  824912 start.go:83] releasing machines lock for "missing-upgrade-701984", held for 25.396881584s
	I1109 22:21:55.075195  824912 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-701984
	I1109 22:21:55.094167  824912 ssh_runner.go:195] Run: cat /version.json
	I1109 22:21:55.094223  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:55.094276  824912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 22:21:55.094417  824912 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-701984
	I1109 22:21:55.117204  824912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa Username:docker}
	I1109 22:21:55.119524  824912 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/missing-upgrade-701984/id_rsa Username:docker}
	W1109 22:21:55.215884  824912 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1109 22:21:55.217164  824912 ssh_runner.go:195] Run: systemctl --version
	I1109 22:21:55.366597  824912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 22:21:55.474124  824912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 22:21:55.479604  824912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:21:55.501393  824912 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 22:21:55.501472  824912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:21:55.536664  824912 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 22:21:55.536687  824912 start.go:472] detecting cgroup driver to use...
	I1109 22:21:55.536717  824912 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1109 22:21:55.536774  824912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 22:21:55.565059  824912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 22:21:55.576816  824912 docker.go:203] disabling cri-docker service (if available) ...
	I1109 22:21:55.576905  824912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 22:21:55.589298  824912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 22:21:55.601037  824912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1109 22:21:55.613693  824912 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1109 22:21:55.613765  824912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 22:21:55.728557  824912 docker.go:219] disabling docker service ...
	I1109 22:21:55.728627  824912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 22:21:55.744453  824912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 22:21:55.758636  824912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 22:21:55.867024  824912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 22:21:56.009476  824912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 22:21:56.021924  824912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 22:21:56.039275  824912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1109 22:21:56.039343  824912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:21:56.052067  824912 out.go:177] 
	W1109 22:21:56.053950  824912 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1109 22:21:56.053966  824912 out.go:239] * 
	* 
	W1109 22:21:56.054925  824912 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 22:21:56.057047  824912 out.go:177] 

** /stderr **
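For context on the network.go lines in the stderr above: minikube scans candidate private /24 subnets, stepping the third octet by 9 (49, 58, 67, 76, ...), and takes the first one that no existing bridge interface occupies. A minimal Go sketch of that scan; isTaken is a hypothetical stand-in for the real interface inspection:

    package main

    import "fmt"

    // isTaken is a hypothetical stand-in for minikube's real check, which
    // inspects local bridge interfaces (the network.go:214 lines above).
    func isTaken(subnet string, taken map[string]bool) bool { return taken[subnet] }

    func main() {
    	// Subnets already claimed by other profiles in this run, per the log.
    	taken := map[string]bool{
    		"192.168.49.0/24": true,
    		"192.168.58.0/24": true,
    		"192.168.67.0/24": true,
    	}
    	// Step the third octet by 9, mirroring the 49 -> 58 -> 67 -> 76 progression.
    	for octet := 49; octet <= 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if isTaken(subnet, taken) {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", subnet) // lands on 192.168.76.0/24 here
    		break
    	}
    }

Here the first three candidates are held by other profiles' bridges (br-c8ab7f0d0118 and friends), so the run settles on 192.168.76.0/24, matching the log.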
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-701984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-11-09 22:21:56.097097519 +0000 UTC m=+3240.839068379
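The exit status 90 traces to the final sed in the stderr above: this run reuses the old kicbase v0.0.17 image, where /etc/crio/crio.conf.d/02-crio.conf does not exist (CRI-O's drop-in config directory came later), so the pause_image rewrite fails. A defensive variant would fall back to the legacy monolithic /etc/crio/crio.conf; a minimal Go sketch of that idea, illustrative only and not minikube's actual fix:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	pauseImage := "registry.k8s.io/pause:3.2"
    	// Try the drop-in path first, then the legacy monolithic config.
    	candidates := []string{"/etc/crio/crio.conf.d/02-crio.conf", "/etc/crio/crio.conf"}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	for _, path := range candidates {
    		data, err := os.ReadFile(path)
    		if err != nil {
    			continue // absent on this image; fall through to the next path
    		}
    		updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    		if err := os.WriteFile(path, updated, 0o644); err != nil {
    			fmt.Fprintln(os.Stderr, "update pause_image:", err)
    			os.Exit(1)
    		}
    		fmt.Println("updated", path)
    		return
    	}
    	fmt.Fprintln(os.Stderr, "no CRI-O config found in either location")
    	os.Exit(1)
    }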
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-701984
helpers_test.go:235: (dbg) docker inspect missing-upgrade-701984:

-- stdout --
	[
	    {
	        "Id": "7b5646eec8b2c01f833a4c156f02ab506bb94c191d5e2c7113ef31b470d7b2b4",
	        "Created": "2023-11-09T22:21:47.952862987Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 826978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-09T22:21:48.435375044Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/7b5646eec8b2c01f833a4c156f02ab506bb94c191d5e2c7113ef31b470d7b2b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7b5646eec8b2c01f833a4c156f02ab506bb94c191d5e2c7113ef31b470d7b2b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/7b5646eec8b2c01f833a4c156f02ab506bb94c191d5e2c7113ef31b470d7b2b4/hosts",
	        "LogPath": "/var/lib/docker/containers/7b5646eec8b2c01f833a4c156f02ab506bb94c191d5e2c7113ef31b470d7b2b4/7b5646eec8b2c01f833a4c156f02ab506bb94c191d5e2c7113ef31b470d7b2b4-json.log",
	        "Name": "/missing-upgrade-701984",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-701984:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-701984",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2f1094ff07b0e4c07f54c2a6e6563a1a7fdc1fb24a26b2b255050b85575a66e0-init/diff:/var/lib/docker/overlay2/d76ba5b719d15ff955f959297ecac539c692a36c035f078b6597f0791d2d018d/diff:/var/lib/docker/overlay2/917a22c8a32d22431f50f1024f7b0d1c769184977dec621fa23685c5ff8e8cdb/diff:/var/lib/docker/overlay2/6a1676e8d376b69f3dc846fd625c5338a554067017b58efd4c7d4a6aa6031530/diff:/var/lib/docker/overlay2/8ed5ecf64444d0ac8b7720f1c74f7c7342ec0e1b406d880f78f6c69ee0a52c4d/diff:/var/lib/docker/overlay2/4ffcfb3a8d3c8e2f62857f08736a944774cd4dcaf3213ea195c8fedc6e1e38a8/diff:/var/lib/docker/overlay2/ccbd05a2243046be2ffd0791d2bbe7932f1d079f0885d20abca4a2955eeee255/diff:/var/lib/docker/overlay2/bd115b2113a137f0e2e4a936a75fec770c9349d1177866688457c6d44a599fac/diff:/var/lib/docker/overlay2/d4727bfd1a8954338f6808f70d5ee9db4f50680fd1b82c01b6eb1d786e3dcf81/diff:/var/lib/docker/overlay2/64fa46d373fc7b2c9191b13ec413000d17d8eaed92954832b5bb8b5d7e29cef0/diff:/var/lib/docker/overlay2/482045
39bba337cca5410445e5f03881bf7f16a258b718c78853aae622ceac55/diff:/var/lib/docker/overlay2/a79afa2e8bd54057c6f3c5451571f3981fbc3fc4093c46093f5cf67e666e5266/diff:/var/lib/docker/overlay2/cada879aa885ca7aa1d9cc921c33a41c18ffd92eb05868bbf5b89d88dc5567e4/diff:/var/lib/docker/overlay2/59bba10032ab36a3c93678e014930acf417b3f9f35e1f4efbeab6a5f774e97ca/diff:/var/lib/docker/overlay2/6a8da0bba283153f3ba6aefb7352dee8608623ab73c2e5e880f27231c9d36773/diff:/var/lib/docker/overlay2/76ac07df130302c5b3235ab5d8291b34bb50c66b59d14d742bc392c9ceca1e81/diff:/var/lib/docker/overlay2/439120e58cdbc48b2ce323f37bd2c27b3039cfb4536aab49b2008227479ab2b5/diff:/var/lib/docker/overlay2/f0ba613b07ff21a6b27b3da67c9139e8e6ce772f12f73952c2a7c53e10ad0504/diff:/var/lib/docker/overlay2/52ddcc25eef27151bd05a550b900098b4af9e798018fc68a4a7b9607d63554fd/diff:/var/lib/docker/overlay2/2c4e939c8d6ce289c84be20ee1f18cb81de2aa12cd22aba21363c5872ff42eb9/diff:/var/lib/docker/overlay2/c699ab3d3b82d7e0aa43ab721bfac9394ce55dbf43a4759248dc40b698cc7625/diff:/var/lib/d
ocker/overlay2/068e13d1cf33e10f597fd4bae9cf0e2a29048796e554e0f144934cd8caa67ce0/diff:/var/lib/docker/overlay2/356f548bb90f9c7b05555d832b1c464fbe968771d7856d98a12e6de1cf5bf2bf/diff:/var/lib/docker/overlay2/ae32a5dd4cf23b0c165d5826c6efdae46a20bedfd78928bc328084fb7e2dfaf0/diff:/var/lib/docker/overlay2/daae3dedb93122ef620f89ca89312de98760388bbd43aa72ba3f5320fbc7c8cc/diff:/var/lib/docker/overlay2/3c4e89a768be739677b989e9c1e2d612b1c7396d49e47fbcb1ea1aa3a0922d27/diff:/var/lib/docker/overlay2/5d61f50af888692d54008fe325bd34482657c501485c395a4060167e68476d2b/diff:/var/lib/docker/overlay2/e1bdb2596a12408892b15170f0f700c1de29e84a003fb483f1a97fa87104fbf8/diff:/var/lib/docker/overlay2/ed3ed63136123b5e47a95f81829aca87898af5d0e4cec6a109460ce43678b886/diff:/var/lib/docker/overlay2/0d6de2b56a43a41eeff70c7abfa90cc7a635dca8cd2a57b81bb97a85dd2f22a2/diff:/var/lib/docker/overlay2/37c0fad289b43ef7a15ad5ebe7d61110777578768532625eee7dcc47b85204d6/diff:/var/lib/docker/overlay2/815a0719ea7430e487028bbd915296a48521ce42d0a09cd267d939e02cb
44a57/diff:/var/lib/docker/overlay2/298e04f122894721d43d75424e5e04b24378223f93bc295d326a1bb025d20572/diff:/var/lib/docker/overlay2/7a17355e62f95661046c160a4679e0b1d0679b3d489cba02d8f11e08edd638db/diff:/var/lib/docker/overlay2/b6e27627da116f982b7eef49a5702f2ab07fb2ecc9d21e4615e0f1118705d559/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2f1094ff07b0e4c07f54c2a6e6563a1a7fdc1fb24a26b2b255050b85575a66e0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2f1094ff07b0e4c07f54c2a6e6563a1a7fdc1fb24a26b2b255050b85575a66e0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2f1094ff07b0e4c07f54c2a6e6563a1a7fdc1fb24a26b2b255050b85575a66e0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-701984",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-701984/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-701984",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-701984",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-701984",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69d82b20f0d97e132b3a5d2e814684803ca4df9f1933b87a889ebadae5486fc6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/69d82b20f0d9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-701984": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7b5646eec8b2",
	                        "missing-upgrade-701984"
	                    ],
	                    "NetworkID": "4fe563e6741ca0ab8ff0c54cb8125613e01ce0ab583ef63ec9816487d531d11e",
	                    "EndpointID": "ecbfb38afc2fedb79ec0618a2e640f5bf7692bade0dd9dc2282b4079c9af6684",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
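The full docker inspect dump above is collected only for the post-mortem; during the run itself, the cli_runner lines pull single fields out of this same JSON with --format Go templates. A minimal sketch of that pattern, assuming docker is on PATH; inspectField is an illustrative helper, not minikube's API:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // inspectField runs docker inspect with a Go template, the same
    // pattern as the cli_runner lines throughout this log.
    func inspectField(container, tmpl string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format", tmpl, container).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	name := "missing-upgrade-701984"
    	status, err := inspectField(name, "{{.State.Status}}")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	// The 22/tcp template below is how the SSH host port (33849 here) is found.
    	port, _ := inspectField(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
    	fmt.Printf("status=%s ssh_port=%s\n", status, port)
    }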
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-701984 -n missing-upgrade-701984
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-701984 -n missing-upgrade-701984: exit status 6 (336.286668ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1109 22:21:56.438909  828147 status.go:415] kubeconfig endpoint: got: 192.168.59.33:8443, want: 192.168.76.2:8443

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-701984" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
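The status.go:415 line pinpoints the exit status 6: minikube compares the profile's server URL in the kubeconfig with the endpoint derived from the live container, and the stale 192.168.59.33 entry left behind by the v1.17.0 run no longer matches the freshly created 192.168.76.2 network. A minimal sketch of that comparison, assuming k8s.io/client-go is available and the kubeconfig sits at its default path:

    package main

    import (
    	"fmt"
    	"net/url"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load ~/.kube/config via client-go's recommended default path.
    	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
    	if err != nil {
    		fmt.Println("load kubeconfig:", err)
    		return
    	}
    	cluster, ok := cfg.Clusters["missing-upgrade-701984"]
    	if !ok {
    		fmt.Println("no kubeconfig entry for this profile")
    		return
    	}
    	got, _ := url.Parse(cluster.Server)          // e.g. https://192.168.59.33:8443
    	want := "192.168.76.2:8443"                  // endpoint derived from the container
    	if got.Host != want {
    		fmt.Printf("kubeconfig endpoint: got: %s, want: %s\n", got.Host, want)
    	}
    }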
helpers_test.go:175: Cleaning up "missing-upgrade-701984" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-701984
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-701984: (1.902377793s)
--- FAIL: TestMissingContainerUpgrade (174.27s)

TestStoppedBinaryUpgrade/Upgrade (89.28s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.604690962.exe start -p stopped-upgrade-713444 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.604690962.exe start -p stopped-upgrade-713444 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.987641133s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.604690962.exe -p stopped-upgrade-713444 stop
E1109 22:23:15.586223  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.604690962.exe -p stopped-upgrade-713444 stop: (20.467835988s)
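With the old binary stopped, the Run line below is the test's third and final phase: restarting the same profile with the freshly built binary. Condensed, the test drives three commands in sequence; a minimal sketch with os/exec, not the actual test code, using the binary paths from this log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // run executes a command and streams its output, mirroring the (dbg) Run lines.
    func run(bin string, args ...string) error {
    	cmd := exec.Command(bin, args...)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	profile := "stopped-upgrade-713444"
    	oldBin := "/tmp/minikube-v1.17.0.604690962.exe" // released binary under test
    	newBin := "out/minikube-linux-arm64"            // freshly built binary

    	steps := [][]string{
    		{oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio"},
    		{oldBin, "-p", profile, "stop"},
    		{newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio"},
    	}
    	for _, s := range steps {
    		if err := run(s[0], s[1:]...); err != nil {
    			fmt.Fprintln(os.Stderr, "step failed:", err) // the third step fails below with exit status 90
    			os.Exit(1)
    		}
    	}
    }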
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-713444 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-713444 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.82768833s)

-- stdout --
	* [stopped-upgrade-713444] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-713444 in cluster stopped-upgrade-713444
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-713444" ...
	
	

-- /stdout --
** stderr ** 
	I1109 22:23:21.996140  832406 out.go:296] Setting OutFile to fd 1 ...
	I1109 22:23:21.996399  832406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:23:21.996429  832406 out.go:309] Setting ErrFile to fd 2...
	I1109 22:23:21.996450  832406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:23:21.996717  832406 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 22:23:21.997097  832406 out.go:303] Setting JSON to false
	I1109 22:23:21.998347  832406 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18352,"bootTime":1699550250,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 22:23:21.998527  832406 start.go:138] virtualization:  
	I1109 22:23:22.001137  832406 out.go:177] * [stopped-upgrade-713444] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 22:23:22.004302  832406 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 22:23:22.006393  832406 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 22:23:22.004389  832406 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1109 22:23:22.008370  832406 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:23:22.004431  832406 notify.go:220] Checking for updates...
	I1109 22:23:22.012537  832406 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 22:23:22.014578  832406 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 22:23:22.016801  832406 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 22:23:22.019243  832406 config.go:182] Loaded profile config "stopped-upgrade-713444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1109 22:23:22.021570  832406 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1109 22:23:22.023413  832406 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 22:23:22.053366  832406 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 22:23:22.053462  832406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:23:22.145151  832406 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1109 22:23:22.173773  832406 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-09 22:23:22.164179295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:23:22.173886  832406 docker.go:295] overlay module found
	I1109 22:23:22.177919  832406 out.go:177] * Using the docker driver based on existing profile
	I1109 22:23:22.180013  832406 start.go:298] selected driver: docker
	I1109 22:23:22.180033  832406 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-713444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-713444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.229 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1109 22:23:22.180131  832406 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 22:23:22.180740  832406 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:23:22.253202  832406 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-09 22:23:22.243694241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:23:22.253563  832406 cni.go:84] Creating CNI manager for ""
	I1109 22:23:22.253584  832406 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 22:23:22.253597  832406 start_flags.go:323] config:
	{Name:stopped-upgrade-713444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-713444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.229 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1109 22:23:22.256022  832406 out.go:177] * Starting control plane node stopped-upgrade-713444 in cluster stopped-upgrade-713444
	I1109 22:23:22.258074  832406 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 22:23:22.260095  832406 out.go:177] * Pulling base image ...
	I1109 22:23:22.262093  832406 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1109 22:23:22.262155  832406 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1109 22:23:22.279559  832406 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1109 22:23:22.279584  832406 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1109 22:23:22.335419  832406 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1109 22:23:22.335559  832406 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/stopped-upgrade-713444/config.json ...
	I1109 22:23:22.335667  832406 cache.go:107] acquiring lock: {Name:mk0fb2e9d58bfe32f8d1db761b0337bed1329a4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.335754  832406 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1109 22:23:22.335764  832406 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.728µs
	I1109 22:23:22.335773  832406 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1109 22:23:22.335783  832406 cache.go:107] acquiring lock: {Name:mk562459bcec5403e80f5c62ad32832a54565d1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.335808  832406 cache.go:194] Successfully downloaded all kic artifacts
	I1109 22:23:22.335817  832406 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1109 22:23:22.335823  832406 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 41.485µs
	I1109 22:23:22.335830  832406 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1109 22:23:22.335838  832406 cache.go:107] acquiring lock: {Name:mk8de15fecb3746b0f76783348b43f47c8853056 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.335851  832406 start.go:365] acquiring machines lock for stopped-upgrade-713444: {Name:mkfc1d5fae7241d4adb9daf943930e3977654cae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.335863  832406 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1109 22:23:22.335868  832406 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 31.245µs
	I1109 22:23:22.335875  832406 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1109 22:23:22.335883  832406 cache.go:107] acquiring lock: {Name:mk8754c3c41de64f33f6d1748d623edc176abb20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.335891  832406 start.go:369] acquired machines lock for "stopped-upgrade-713444" in 26.043µs
	I1109 22:23:22.335908  832406 start.go:96] Skipping create...Using existing machine configuration
	I1109 22:23:22.335920  832406 fix.go:54] fixHost starting: 
	I1109 22:23:22.335921  832406 cache.go:107] acquiring lock: {Name:mk64c077fdde984c231a6bd4c100c4507daece68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.335949  832406 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1109 22:23:22.335954  832406 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 37.916µs
	I1109 22:23:22.335960  832406 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1109 22:23:22.335968  832406 cache.go:107] acquiring lock: {Name:mkb7fbb50808c8f6b1f3a6ba92fa44165f339dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.335993  832406 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1109 22:23:22.335997  832406 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 29.776µs
	I1109 22:23:22.336003  832406 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1109 22:23:22.336011  832406 cache.go:107] acquiring lock: {Name:mk8cf6b5fdce6cfda35ea920ea59cac04b0c118e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.336040  832406 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1109 22:23:22.336045  832406 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 35.422µs
	I1109 22:23:22.336051  832406 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1109 22:23:22.336071  832406 cache.go:107] acquiring lock: {Name:mkc21f29362cbbf14e9c030c0d3863baf44f442e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 22:23:22.336100  832406 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1109 22:23:22.336105  832406 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 35.117µs
	I1109 22:23:22.336111  832406 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1109 22:23:22.336176  832406 cli_runner.go:164] Run: docker container inspect stopped-upgrade-713444 --format={{.State.Status}}
	I1109 22:23:22.335909  832406 cache.go:115] /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1109 22:23:22.336217  832406 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 332.896µs
	I1109 22:23:22.336226  832406 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1109 22:23:22.336231  832406 cache.go:87] Successfully saved all images to host disk.
	I1109 22:23:22.353768  832406 fix.go:102] recreateIfNeeded on stopped-upgrade-713444: state=Stopped err=<nil>
	W1109 22:23:22.353796  832406 fix.go:128] unexpected machine state, will restart: <nil>
	I1109 22:23:22.357674  832406 out.go:177] * Restarting existing docker container for "stopped-upgrade-713444" ...
	I1109 22:23:22.360022  832406 cli_runner.go:164] Run: docker start stopped-upgrade-713444
	I1109 22:23:22.698972  832406 cli_runner.go:164] Run: docker container inspect stopped-upgrade-713444 --format={{.State.Status}}
	I1109 22:23:22.728987  832406 kic.go:430] container "stopped-upgrade-713444" state is running.
	I1109 22:23:22.729370  832406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-713444
	I1109 22:23:22.756145  832406 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/stopped-upgrade-713444/config.json ...
	I1109 22:23:22.756360  832406 machine.go:88] provisioning docker machine ...
	I1109 22:23:22.756381  832406 ubuntu.go:169] provisioning hostname "stopped-upgrade-713444"
	I1109 22:23:22.756431  832406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-713444
	I1109 22:23:22.776085  832406 main.go:141] libmachine: Using SSH client type: native
	I1109 22:23:22.776508  832406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1109 22:23:22.776527  832406 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-713444 && echo "stopped-upgrade-713444" | sudo tee /etc/hostname
	I1109 22:23:22.777099  832406 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46556->127.0.0.1:33857: read: connection reset by peer
	I1109 22:23:25.935294  832406 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-713444
	
	I1109 22:23:25.935382  832406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-713444
	I1109 22:23:25.955104  832406 main.go:141] libmachine: Using SSH client type: native
	I1109 22:23:25.955523  832406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1109 22:23:25.955546  832406 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-713444' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-713444/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-713444' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 22:23:26.099246  832406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1109 22:23:26.099273  832406 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17565-708188/.minikube CaCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17565-708188/.minikube}
	I1109 22:23:26.099308  832406 ubuntu.go:177] setting up certificates
	I1109 22:23:26.099323  832406 provision.go:83] configureAuth start
	I1109 22:23:26.099386  832406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-713444
	I1109 22:23:26.118260  832406 provision.go:138] copyHostCerts
	I1109 22:23:26.118484  832406 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem, removing ...
	I1109 22:23:26.118500  832406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem
	I1109 22:23:26.118580  832406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/ca.pem (1078 bytes)
	I1109 22:23:26.118686  832406 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem, removing ...
	I1109 22:23:26.118696  832406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem
	I1109 22:23:26.118723  832406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/cert.pem (1123 bytes)
	I1109 22:23:26.118781  832406 exec_runner.go:144] found /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem, removing ...
	I1109 22:23:26.118789  832406 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem
	I1109 22:23:26.118817  832406 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17565-708188/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17565-708188/.minikube/key.pem (1679 bytes)
	I1109 22:23:26.118867  832406 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-713444 san=[192.168.59.229 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-713444]
	I1109 22:23:26.872573  832406 provision.go:172] copyRemoteCerts
	I1109 22:23:26.872642  832406 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 22:23:26.872686  832406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-713444
	I1109 22:23:26.891287  832406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/stopped-upgrade-713444/id_rsa Username:docker}
	I1109 22:23:26.991441  832406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1109 22:23:27.027098  832406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1109 22:23:27.052526  832406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 22:23:27.078473  832406 provision.go:86] duration metric: configureAuth took 979.135586ms
	I1109 22:23:27.078503  832406 ubuntu.go:193] setting minikube options for container-runtime
	I1109 22:23:27.078688  832406 config.go:182] Loaded profile config "stopped-upgrade-713444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1109 22:23:27.078802  832406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-713444
	I1109 22:23:27.108049  832406 main.go:141] libmachine: Using SSH client type: native
	I1109 22:23:27.108465  832406 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 33857 <nil> <nil>}
	I1109 22:23:27.108485  832406 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 22:23:27.590888  832406 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 22:23:27.590918  832406 machine.go:91] provisioned docker machine in 4.834534866s
	I1109 22:23:27.590930  832406 start.go:300] post-start starting for "stopped-upgrade-713444" (driver="docker")
	I1109 22:23:27.590942  832406 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 22:23:27.591012  832406 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 22:23:27.591050  832406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-713444
	I1109 22:23:27.627788  832406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/stopped-upgrade-713444/id_rsa Username:docker}
	I1109 22:23:27.735819  832406 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 22:23:27.740345  832406 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 22:23:27.740367  832406 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 22:23:27.740377  832406 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 22:23:27.740384  832406 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1109 22:23:27.740393  832406 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/addons for local assets ...
	I1109 22:23:27.740450  832406 filesync.go:126] Scanning /home/jenkins/minikube-integration/17565-708188/.minikube/files for local assets ...
	I1109 22:23:27.740525  832406 filesync.go:149] local asset: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem -> 7135732.pem in /etc/ssl/certs
	I1109 22:23:27.740634  832406 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 22:23:27.750083  832406 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/ssl/certs/7135732.pem --> /etc/ssl/certs/7135732.pem (1708 bytes)
	I1109 22:23:27.773077  832406 start.go:303] post-start completed in 182.131096ms
	I1109 22:23:27.773153  832406 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 22:23:27.773194  832406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-713444
	I1109 22:23:27.790831  832406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/stopped-upgrade-713444/id_rsa Username:docker}
	I1109 22:23:27.887954  832406 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 22:23:27.893090  832406 fix.go:56] fixHost completed within 5.55716969s
	I1109 22:23:27.893113  832406 start.go:83] releasing machines lock for "stopped-upgrade-713444", held for 5.557213266s
	I1109 22:23:27.893189  832406 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-713444
	I1109 22:23:27.910532  832406 ssh_runner.go:195] Run: cat /version.json
	I1109 22:23:27.910587  832406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-713444
	I1109 22:23:27.910659  832406 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 22:23:27.910711  832406 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-713444
	I1109 22:23:27.933427  832406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/stopped-upgrade-713444/id_rsa Username:docker}
	I1109 22:23:27.947085  832406 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33857 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/stopped-upgrade-713444/id_rsa Username:docker}
	W1109 22:23:28.030497  832406 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1109 22:23:28.030621  832406 ssh_runner.go:195] Run: systemctl --version
	I1109 22:23:28.106551  832406 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 22:23:28.220494  832406 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1109 22:23:28.225892  832406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:23:28.249767  832406 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1109 22:23:28.249841  832406 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 22:23:28.275523  832406 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1109 22:23:28.275547  832406 start.go:472] detecting cgroup driver to use...
	I1109 22:23:28.275577  832406 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1109 22:23:28.275628  832406 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 22:23:28.305155  832406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 22:23:28.320521  832406 docker.go:203] disabling cri-docker service (if available) ...
	I1109 22:23:28.320594  832406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 22:23:28.332081  832406 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 22:23:28.343250  832406 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1109 22:23:28.354984  832406 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1109 22:23:28.355044  832406 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 22:23:28.458534  832406 docker.go:219] disabling docker service ...
	I1109 22:23:28.458613  832406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 22:23:28.471832  832406 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 22:23:28.484131  832406 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 22:23:28.592288  832406 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 22:23:28.697801  832406 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 22:23:28.710032  832406 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 22:23:28.726696  832406 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1109 22:23:28.726793  832406 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 22:23:28.740363  832406 out.go:177] 
	W1109 22:23:28.742608  832406 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1109 22:23:28.742634  832406 out.go:239] * 
	* 
	W1109 22:23:28.743543  832406 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 22:23:28.746309  832406 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-713444 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (89.28s)
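Note: the failure above looks like an environment mismatch rather than a test-logic bug. The profile was created by minikube v1.17.0 on the old kicbase v0.0.17 image, which apparently does not ship the /etc/crio/crio.conf.d/ drop-in directory that the current binary's pause_image sed edit targets, so the upgrade aborts with RUNTIME_ENABLE. A minimal sketch for confirming this by hand, assuming the stopped-upgrade-713444 container from the log is still present on the host (the container name and paths are taken from the error output above):

	# Bring the kic container back up and look for the drop-in directory the new binary expects.
	docker start stopped-upgrade-713444
	docker exec stopped-upgrade-713444 ls /etc/crio/crio.conf.d/    # expected to fail if the old image lacks it
	# On an image this old, any pause_image setting would live in the monolithic config instead:
	docker exec stopped-upgrade-713444 grep -n 'pause_image' /etc/crio/crio.conf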

                                                
                                    

Test pass (268/307)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 11.91
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.32
10 TestDownloadOnly/v1.28.3/json-events 10.32
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.25
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.61
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
25 TestAddons/Setup 176.77
27 TestAddons/parallel/Registry 14.59
29 TestAddons/parallel/InspektorGadget 10.84
30 TestAddons/parallel/MetricsServer 5.88
33 TestAddons/parallel/CSI 56.41
34 TestAddons/parallel/Headlamp 12.24
35 TestAddons/parallel/CloudSpanner 5.67
36 TestAddons/parallel/LocalPath 53.66
37 TestAddons/parallel/NvidiaDevicePlugin 5.65
40 TestAddons/serial/GCPAuth/Namespaces 0.17
41 TestAddons/StoppedEnableDisable 12.45
42 TestCertOptions 37.85
43 TestCertExpiration 254.82
45 TestForceSystemdFlag 40.25
46 TestForceSystemdEnv 44.27
52 TestErrorSpam/setup 32.95
53 TestErrorSpam/start 0.86
54 TestErrorSpam/status 1.18
55 TestErrorSpam/pause 1.91
56 TestErrorSpam/unpause 2.05
57 TestErrorSpam/stop 1.48
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 76.18
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 41.96
64 TestFunctional/serial/KubeContext 0.06
65 TestFunctional/serial/KubectlGetPods 0.1
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.78
69 TestFunctional/serial/CacheCmd/cache/add_local 1.09
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.19
74 TestFunctional/serial/CacheCmd/cache/delete 0.17
75 TestFunctional/serial/MinikubeKubectlCmd 0.2
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
77 TestFunctional/serial/ExtraConfig 35.88
78 TestFunctional/serial/ComponentHealth 0.11
79 TestFunctional/serial/LogsCmd 1.87
80 TestFunctional/serial/LogsFileCmd 1.85
81 TestFunctional/serial/InvalidService 4.73
83 TestFunctional/parallel/ConfigCmd 0.61
84 TestFunctional/parallel/DashboardCmd 33.26
85 TestFunctional/parallel/DryRun 0.5
86 TestFunctional/parallel/InternationalLanguage 0.21
87 TestFunctional/parallel/StatusCmd 1.14
91 TestFunctional/parallel/ServiceCmdConnect 18.67
92 TestFunctional/parallel/AddonsCmd 0.18
95 TestFunctional/parallel/SSHCmd 0.77
96 TestFunctional/parallel/CpCmd 1.68
98 TestFunctional/parallel/FileSync 0.33
99 TestFunctional/parallel/CertSync 1.92
103 TestFunctional/parallel/NodeLabels 0.1
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
107 TestFunctional/parallel/License 0.37
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
113 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
114 TestFunctional/parallel/ServiceCmd/List 0.55
115 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
116 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
117 TestFunctional/parallel/ServiceCmd/Format 0.44
118 TestFunctional/parallel/ServiceCmd/URL 0.43
119 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
120 TestFunctional/parallel/ProfileCmd/profile_list 0.44
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
122 TestFunctional/parallel/MountCmd/any-port 48.21
124 TestFunctional/parallel/MountCmd/specific-port 2.02
125 TestFunctional/parallel/MountCmd/VerifyCleanup 2.1
126 TestFunctional/parallel/Version/short 0.08
127 TestFunctional/parallel/Version/components 0.86
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
132 TestFunctional/parallel/ImageCommands/ImageBuild 2.86
133 TestFunctional/parallel/ImageCommands/Setup 1.74
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.39
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.98
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.96
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.99
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.33
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.04
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/delete_addon-resizer_images 0.09
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.02
154 TestIngressAddonLegacy/StartLegacyK8sCluster 86.17
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
161 TestJSONOutput/start/Command 75.02
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 0.81
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 0.75
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 5.89
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 0.27
186 TestKicCustomNetwork/create_custom_network 46.66
187 TestKicCustomNetwork/use_default_bridge_network 34.78
188 TestKicExistingNetwork 36.39
189 TestKicCustomSubnet 36.4
190 TestKicStaticIP 34.09
191 TestMainNoArgs 0.07
192 TestMinikubeProfile 69.88
195 TestMountStart/serial/StartWithMountFirst 6.94
196 TestMountStart/serial/VerifyMountFirst 0.28
197 TestMountStart/serial/StartWithMountSecond 9.24
198 TestMountStart/serial/VerifyMountSecond 0.31
199 TestMountStart/serial/DeleteFirst 1.68
200 TestMountStart/serial/VerifyMountPostDelete 0.32
201 TestMountStart/serial/Stop 1.26
202 TestMountStart/serial/RestartStopped 8.22
203 TestMountStart/serial/VerifyMountPostStop 0.29
206 TestMultiNode/serial/FreshStart2Nodes 135.79
207 TestMultiNode/serial/DeployApp2Nodes 5.42
209 TestMultiNode/serial/AddNode 48.52
210 TestMultiNode/serial/ProfileList 0.38
211 TestMultiNode/serial/CopyFile 11.35
212 TestMultiNode/serial/StopNode 2.41
213 TestMultiNode/serial/StartAfterStop 12.65
214 TestMultiNode/serial/RestartKeepsNodes 123.07
215 TestMultiNode/serial/DeleteNode 5.14
216 TestMultiNode/serial/StopMultiNode 24.11
217 TestMultiNode/serial/RestartMultiNode 77.98
218 TestMultiNode/serial/ValidateNameConflict 35.36
223 TestPreload 164.54
225 TestScheduledStopUnix 108.45
228 TestInsufficientStorage 11.27
231 TestKubernetesUpgrade 395.04
234 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
235 TestNoKubernetes/serial/StartWithK8s 45.05
236 TestNoKubernetes/serial/StartWithStopK8s 8.39
237 TestNoKubernetes/serial/Start 10.84
238 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
239 TestNoKubernetes/serial/ProfileList 1.1
240 TestNoKubernetes/serial/Stop 1.31
241 TestNoKubernetes/serial/StartNoArgs 7.51
242 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.57
243 TestStoppedBinaryUpgrade/Setup 1.11
245 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
254 TestPause/serial/Start 57.56
255 TestPause/serial/SecondStartNoReconfiguration 29.73
256 TestPause/serial/Pause 1.23
257 TestPause/serial/VerifyStatus 0.47
258 TestPause/serial/Unpause 0.92
259 TestPause/serial/PauseAgain 1.08
260 TestPause/serial/DeletePaused 3.4
261 TestPause/serial/VerifyDeletedResources 0.54
269 TestNetworkPlugins/group/false 5.83
274 TestStartStop/group/old-k8s-version/serial/FirstStart 137.9
275 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
277 TestStartStop/group/old-k8s-version/serial/Stop 12.14
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
279 TestStartStop/group/old-k8s-version/serial/SecondStart 438.73
281 TestStartStop/group/no-preload/serial/FirstStart 70.29
282 TestStartStop/group/no-preload/serial/DeployApp 8.48
283 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
284 TestStartStop/group/no-preload/serial/Stop 12.05
285 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
286 TestStartStop/group/no-preload/serial/SecondStart 361.36
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.5
290 TestStartStop/group/old-k8s-version/serial/Pause 4.46
292 TestStartStop/group/embed-certs/serial/FirstStart 85.07
293 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.04
294 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.19
295 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.5
296 TestStartStop/group/no-preload/serial/Pause 4.36
298 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.67
299 TestStartStop/group/embed-certs/serial/DeployApp 9.63
300 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
301 TestStartStop/group/embed-certs/serial/Stop 12.08
302 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
303 TestStartStop/group/embed-certs/serial/SecondStart 618.6
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.57
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.75
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.43
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 346.72
309 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.03
310 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
311 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
312 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.51
314 TestStartStop/group/newest-cni/serial/FirstStart 46.52
315 TestStartStop/group/newest-cni/serial/DeployApp 0
316 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
317 TestStartStop/group/newest-cni/serial/Stop 1.33
318 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
319 TestStartStop/group/newest-cni/serial/SecondStart 30.3
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
323 TestStartStop/group/newest-cni/serial/Pause 3.29
324 TestNetworkPlugins/group/auto/Start 47.5
325 TestNetworkPlugins/group/auto/KubeletFlags 0.37
326 TestNetworkPlugins/group/auto/NetCatPod 10.37
327 TestNetworkPlugins/group/auto/DNS 0.23
328 TestNetworkPlugins/group/auto/Localhost 0.22
329 TestNetworkPlugins/group/auto/HairPin 0.21
330 TestNetworkPlugins/group/kindnet/Start 80.33
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.04
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
334 TestStartStop/group/embed-certs/serial/Pause 3.53
335 TestNetworkPlugins/group/calico/Start 79.17
336 TestNetworkPlugins/group/kindnet/ControllerPod 5.05
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
338 TestNetworkPlugins/group/kindnet/NetCatPod 12.43
339 TestNetworkPlugins/group/kindnet/DNS 0.24
340 TestNetworkPlugins/group/kindnet/Localhost 0.2
341 TestNetworkPlugins/group/kindnet/HairPin 0.23
342 TestNetworkPlugins/group/custom-flannel/Start 69.54
343 TestNetworkPlugins/group/calico/ControllerPod 5.05
344 TestNetworkPlugins/group/calico/KubeletFlags 0.44
345 TestNetworkPlugins/group/calico/NetCatPod 12.43
346 TestNetworkPlugins/group/calico/DNS 0.23
347 TestNetworkPlugins/group/calico/Localhost 0.21
348 TestNetworkPlugins/group/calico/HairPin 0.24
349 TestNetworkPlugins/group/enable-default-cni/Start 89.57
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
351 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.32
352 TestNetworkPlugins/group/custom-flannel/DNS 0.31
353 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
354 TestNetworkPlugins/group/custom-flannel/HairPin 0.24
355 TestNetworkPlugins/group/flannel/Start 68.91
356 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
357 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.43
358 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
359 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
360 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
361 TestNetworkPlugins/group/flannel/ControllerPod 5.06
362 TestNetworkPlugins/group/flannel/KubeletFlags 0.47
363 TestNetworkPlugins/group/flannel/NetCatPod 11.5
364 TestNetworkPlugins/group/bridge/Start 46.89
365 TestNetworkPlugins/group/flannel/DNS 0.28
366 TestNetworkPlugins/group/flannel/Localhost 0.21
367 TestNetworkPlugins/group/flannel/HairPin 0.23
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
369 TestNetworkPlugins/group/bridge/NetCatPod 11.33
370 TestNetworkPlugins/group/bridge/DNS 32.88
371 TestNetworkPlugins/group/bridge/Localhost 0.18
372 TestNetworkPlugins/group/bridge/HairPin 0.18
x
+
TestDownloadOnly/v1.16.0/json-events (11.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-530486 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-530486 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.910041172s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.91s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-530486
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-530486: exit status 85 (322.745837ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-530486 | jenkins | v1.32.0 | 09 Nov 23 21:27 UTC |          |
	|         | -p download-only-530486        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/09 21:27:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 21:27:55.381376  713578 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:27:55.381612  713578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:27:55.381636  713578 out.go:309] Setting ErrFile to fd 2...
	I1109 21:27:55.381654  713578 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:27:55.381937  713578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	W1109 21:27:55.382120  713578 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17565-708188/.minikube/config/config.json: open /home/jenkins/minikube-integration/17565-708188/.minikube/config/config.json: no such file or directory
	I1109 21:27:55.382603  713578 out.go:303] Setting JSON to true
	I1109 21:27:55.383811  713578 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15025,"bootTime":1699550250,"procs":468,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 21:27:55.383913  713578 start.go:138] virtualization:  
	I1109 21:27:55.387049  713578 out.go:97] [download-only-530486] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 21:27:55.389354  713578 out.go:169] MINIKUBE_LOCATION=17565
	W1109 21:27:55.387295  713578 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball: no such file or directory
	I1109 21:27:55.387345  713578 notify.go:220] Checking for updates...
	I1109 21:27:55.392057  713578 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 21:27:55.393823  713578 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:27:55.395552  713578 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 21:27:55.397226  713578 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1109 21:27:55.401206  713578 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 21:27:55.401474  713578 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 21:27:55.424033  713578 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 21:27:55.424109  713578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:27:55.509441  713578 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-09 21:27:55.499933374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:27:55.509561  713578 docker.go:295] overlay module found
	I1109 21:27:55.511743  713578 out.go:97] Using the docker driver based on user configuration
	I1109 21:27:55.511767  713578 start.go:298] selected driver: docker
	I1109 21:27:55.511772  713578 start.go:902] validating driver "docker" against <nil>
	I1109 21:27:55.511877  713578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:27:55.577353  713578 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-09 21:27:55.568301982 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:27:55.577520  713578 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1109 21:27:55.577805  713578 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1109 21:27:55.577958  713578 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 21:27:55.580069  713578 out.go:169] Using Docker driver with root privileges
	I1109 21:27:55.582068  713578 cni.go:84] Creating CNI manager for ""
	I1109 21:27:55.582088  713578 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:27:55.582099  713578 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 21:27:55.582109  713578 start_flags.go:323] config:
	{Name:download-only-530486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-530486 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:27:55.584273  713578 out.go:97] Starting control plane node download-only-530486 in cluster download-only-530486
	I1109 21:27:55.584293  713578 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 21:27:55.586276  713578 out.go:97] Pulling base image ...
	I1109 21:27:55.586301  713578 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1109 21:27:55.586468  713578 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1109 21:27:55.602934  713578 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1109 21:27:55.603124  713578 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory
	I1109 21:27:55.603220  713578 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1109 21:27:55.677517  713578 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1109 21:27:55.677548  713578 cache.go:56] Caching tarball of preloaded images
	I1109 21:27:55.678157  713578 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1109 21:27:55.680631  713578 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1109 21:27:55.680658  713578 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:27:55.794811  713578 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1109 21:28:00.754246  713578 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-530486"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.32s)
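
Note: the preload tarball above is fetched with an md5 checksum pinned in the download URL. A rough way to re-verify that download by hand (sketch; URL and checksum copied from the log above):

    curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
    # should print 743cd3b7071469270e4dbdc0d89badaa, the checksum pinned in the URL
    md5sum preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4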

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (10.32s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-530486 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-530486 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.322602695s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (10.32s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-530486
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-530486: exit status 85 (90.569321ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-530486 | jenkins | v1.32.0 | 09 Nov 23 21:27 UTC |          |
	|         | -p download-only-530486        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-530486 | jenkins | v1.32.0 | 09 Nov 23 21:28 UTC |          |
	|         | -p download-only-530486        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/09 21:28:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 21:28:07.622866  713651 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:28:07.623007  713651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:28:07.623017  713651 out.go:309] Setting ErrFile to fd 2...
	I1109 21:28:07.623023  713651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:28:07.623281  713651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	W1109 21:28:07.623425  713651 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17565-708188/.minikube/config/config.json: open /home/jenkins/minikube-integration/17565-708188/.minikube/config/config.json: no such file or directory
	I1109 21:28:07.623649  713651 out.go:303] Setting JSON to true
	I1109 21:28:07.624761  713651 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":15037,"bootTime":1699550250,"procs":465,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 21:28:07.624830  713651 start.go:138] virtualization:  
	I1109 21:28:07.645589  713651 out.go:97] [download-only-530486] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 21:28:07.677571  713651 out.go:169] MINIKUBE_LOCATION=17565
	I1109 21:28:07.645899  713651 notify.go:220] Checking for updates...
	I1109 21:28:07.725524  713651 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 21:28:07.759288  713651 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:28:07.791393  713651 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 21:28:07.821751  713651 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1109 21:28:07.886792  713651 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 21:28:07.887376  713651 config.go:182] Loaded profile config "download-only-530486": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1109 21:28:07.887457  713651 start.go:810] api.Load failed for download-only-530486: filestore "download-only-530486": Docker machine "download-only-530486" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1109 21:28:07.887578  713651 driver.go:378] Setting default libvirt URI to qemu:///system
	W1109 21:28:07.887605  713651 start.go:810] api.Load failed for download-only-530486: filestore "download-only-530486": Docker machine "download-only-530486" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1109 21:28:07.911909  713651 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 21:28:07.911997  713651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:28:07.987991  713651 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-09 21:28:07.977750566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:28:07.988120  713651 docker.go:295] overlay module found
	I1109 21:28:08.031125  713651 out.go:97] Using the docker driver based on existing profile
	I1109 21:28:08.031163  713651 start.go:298] selected driver: docker
	I1109 21:28:08.031170  713651 start.go:902] validating driver "docker" against &{Name:download-only-530486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-530486 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:28:08.031402  713651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:28:08.109258  713651 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-09 21:28:08.099017896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:28:08.109709  713651 cni.go:84] Creating CNI manager for ""
	I1109 21:28:08.109730  713651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 21:28:08.109743  713651 start_flags.go:323] config:
	{Name:download-only-530486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-530486 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPU
s:}
	I1109 21:28:08.126687  713651 out.go:97] Starting control plane node download-only-530486 in cluster download-only-530486
	I1109 21:28:08.126728  713651 cache.go:121] Beginning downloading kic base image for docker with crio
	I1109 21:28:08.158506  713651 out.go:97] Pulling base image ...
	I1109 21:28:08.158542  713651 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 21:28:08.158722  713651 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1109 21:28:08.178892  713651 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1109 21:28:08.179066  713651 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory
	I1109 21:28:08.179089  713651 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory, skipping pull
	I1109 21:28:08.179094  713651 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in cache, skipping pull
	I1109 21:28:08.179110  713651 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 as a tarball
	I1109 21:28:08.227861  713651 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1109 21:28:08.227887  713651 cache.go:56] Caching tarball of preloaded images
	I1109 21:28:08.228048  713651 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 21:28:08.254402  713651 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1109 21:28:08.254436  713651 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:28:08.379135  713651 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1109 21:28:16.207610  713651 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:28:16.207746  713651 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17565-708188/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1109 21:28:17.127351  713651 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1109 21:28:17.127501  713651 profile.go:148] Saving config to /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/download-only-530486/config.json ...
	I1109 21:28:17.127723  713651 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1109 21:28:17.127929  713651 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17565-708188/.minikube/cache/linux/arm64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-530486"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.09s)
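
Note: this run also fetches the v1.28.3 kubectl binary with a checksum sourced from dl.k8s.io (see the download.go:107 line above). A manual re-check might look like this (sketch; URLs copied from the log):

    curl -LO https://dl.k8s.io/release/v1.28.3/bin/linux/arm64/kubectl
    curl -LO https://dl.k8s.io/release/v1.28.3/bin/linux/arm64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check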

                                                
                                    
TestDownloadOnly/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-530486
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-333375 --alsologtostderr --binary-mirror http://127.0.0.1:33375 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-333375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-333375
--- PASS: TestBinaryMirror (0.61s)
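
Note: --binary-mirror points the kubectl/kubelet/kubeadm downloads at the given URL. To exercise it outside the harness, something like the following should work (sketch; the mirror directory path is a placeholder, the port matches the one the test picked):

    python3 -m http.server 33375 --directory /path/to/mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-333375 --binary-mirror http://127.0.0.1:33375 --driver=docker --container-runtime=crio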

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-386274
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-386274: exit status 85 (86.15978ms)

                                                
                                                
-- stdout --
	* Profile "addons-386274" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-386274"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-386274
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-386274: exit status 85 (91.504544ms)

                                                
                                                
-- stdout --
	* Profile "addons-386274" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-386274"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
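
Note: both PreSetup checks hinge on minikube returning exit status 85 when the target profile does not exist. Reproducing that by hand is just (sketch; profile name from the test above):

    out/minikube-linux-arm64 addons enable dashboard -p addons-386274
    echo $?    # 85 while the addons-386274 profile does not exist yet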

                                                
                                    
TestAddons/Setup (176.77s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-386274 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-386274 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m56.768908898s)
--- PASS: TestAddons/Setup (176.77s)
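
Note: after a start like the one above, the enabled/disabled state of each addon can be inspected with (sketch):

    out/minikube-linux-arm64 addons list -p addons-386274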

                                                
                                    
TestAddons/parallel/Registry (14.59s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 65.414852ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-qm6sx" [a3bbea32-b042-4884-87b5-f93606dc9a25] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016167478s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vg47b" [d3dba9fa-267c-4a65-9efd-566fb91fc9e2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016985206s
addons_test.go:339: (dbg) Run:  kubectl --context addons-386274 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-386274 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-386274 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.398274844s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-386274 ip
2023/11/09 21:31:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-386274 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.59s)
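
Note: the in-cluster health probe used here boils down to a throwaway busybox pod issuing a HEAD request against the registry service (command copied from the log above, re-wrapped):

    kubectl --context addons-386274 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"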

                                                
                                    
TestAddons/parallel/InspektorGadget (10.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-g5rl5" [1dd41777-e02b-412a-8ac5-544eb1d0140c] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013860615s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-386274
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-386274: (5.829834655s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 5.723094ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-654wx" [439ff363-c043-404a-af5d-eef3139e8db8] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012697151s
addons_test.go:414: (dbg) Run:  kubectl --context addons-386274 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-386274 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)
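
Note: the kubectl top call above only succeeds once metrics-server is serving the metrics API; a quick manual check is (sketch):

    kubectl --context addons-386274 top pods -n kube-system
    kubectl --context addons-386274 top nodes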

                                                
                                    
TestAddons/parallel/CSI (56.41s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 5.089391ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-386274 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-386274 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1a1d2364-f2a1-4465-b640-bb96f1dadb8a] Pending
helpers_test.go:344: "task-pv-pod" [1a1d2364-f2a1-4465-b640-bb96f1dadb8a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1a1d2364-f2a1-4465-b640-bb96f1dadb8a] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.018243657s
addons_test.go:583: (dbg) Run:  kubectl --context addons-386274 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-386274 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-386274 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-386274 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-386274 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-386274 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-386274 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [381ac91b-db2f-4888-bef2-86bff6c46be0] Pending
helpers_test.go:344: "task-pv-pod-restore" [381ac91b-db2f-4888-bef2-86bff6c46be0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [381ac91b-db2f-4888-bef2-86bff6c46be0] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.012624753s
addons_test.go:625: (dbg) Run:  kubectl --context addons-386274 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-386274 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-386274 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-386274 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-386274 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.856050334s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-386274 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.41s)
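
Note: the long run of helpers_test.go:394 polls above is effectively a wait-until-Bound loop on the PVC; written by hand it could look like this (sketch):

    until [ "$(kubectl --context addons-386274 get pvc hpvc-restore -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done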

                                                
                                    
TestAddons/parallel/Headlamp (12.24s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-386274 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-386274 --alsologtostderr -v=1: (1.205319255s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-qwvjk" [57c19d37-ca7f-49d6-99c8-d04177a57cec] Pending
helpers_test.go:344: "headlamp-777fd4b855-qwvjk" [57c19d37-ca7f-49d6-99c8-d04177a57cec] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-qwvjk" [57c19d37-ca7f-49d6-99c8-d04177a57cec] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.029465491s
--- PASS: TestAddons/parallel/Headlamp (12.24s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-j66zt" [16d88aa2-f442-4a47-9243-01219f44a4f5] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016279132s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-386274
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

                                                
                                    
TestAddons/parallel/LocalPath (53.66s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-386274 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-386274 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-386274 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [40199c80-1873-476c-9473-03b99a8015ab] Pending
helpers_test.go:344: "test-local-path" [40199c80-1873-476c-9473-03b99a8015ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [40199c80-1873-476c-9473-03b99a8015ab] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [40199c80-1873-476c-9473-03b99a8015ab] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.012059551s
addons_test.go:890: (dbg) Run:  kubectl --context addons-386274 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-386274 ssh "cat /opt/local-path-provisioner/pvc-2b37186f-28e0-4c99-bc25-8fa1ced967d3_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-386274 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-386274 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-386274 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-386274 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.474207059s)
--- PASS: TestAddons/parallel/LocalPath (53.66s)
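
Note: the readback step above depends on the generated PV name; the pvc-... directory component changes on every run, so a manual re-check has to substitute the current PVC UID (command copied from the log):

    out/minikube-linux-arm64 -p addons-386274 ssh "cat /opt/local-path-provisioner/pvc-2b37186f-28e0-4c99-bc25-8fa1ced967d3_default_test-pvc/file1"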

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9nwzl" [5ce43e7e-9d07-4445-80dc-feaf3384dccb] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.076234618s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-386274
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-386274 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-386274 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.45s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-386274
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-386274: (12.124393486s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-386274
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-386274
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-386274
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

                                                
                                    
TestCertOptions (37.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-639346 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1109 22:27:53.772377  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-639346 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.063190868s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-639346 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-639346 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-639346 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-639346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-639346
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-639346: (2.076306457s)
--- PASS: TestCertOptions (37.85s)
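
Note: the openssl step above dumps the whole apiserver certificate; to eyeball just the names and IPs passed via --apiserver-ips/--apiserver-names, filtering on the SAN extension is enough (sketch):

    out/minikube-linux-arm64 -p cert-options-639346 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"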

                                                
                                    
TestCertExpiration (254.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-054698 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-054698 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.112803105s)
E1109 22:28:15.586374  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-054698 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1109 22:30:59.784495  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-054698 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.62183957s)
helpers_test.go:175: Cleaning up "cert-expiration-054698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-054698
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-054698: (2.079652196s)
--- PASS: TestCertExpiration (254.82s)
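
Note: the effect of --cert-expiration can be confirmed directly on the apiserver certificate; after the second start the notAfter date should sit roughly 8760h out (sketch; cert path as used in TestCertOptions above):

    out/minikube-linux-arm64 -p cert-expiration-054698 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"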

                                                
                                    
TestForceSystemdFlag (40.25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-227562 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-227562 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.304485492s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-227562 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-227562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-227562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-227562: (2.561576521s)
--- PASS: TestForceSystemdFlag (40.25s)
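
The pass above rests on --force-systemd reaching CRI-O's cgroup manager. A sketch of the same assertion, reading the drop-in file named in the log (profile name hypothetical; the expected cgroup_manager value is an assumption based on CRI-O's config format):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read CRI-O's generated drop-in, as docker_test.go does via `minikube ssh`.
	out, err := exec.Command("minikube", "-p", "systemd-demo", "ssh",
		"cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		panic(err)
	}
	// With --force-systemd, CRI-O should be configured for the systemd cgroup manager.
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}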

                                                
                                    
TestForceSystemdEnv (44.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-985695 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-985695 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.515491966s)
helpers_test.go:175: Cleaning up "force-systemd-env-985695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-985695
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-985695: (2.757944349s)
--- PASS: TestForceSystemdEnv (44.27s)

                                                
                                    
TestErrorSpam/setup (32.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-997070 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-997070 --driver=docker  --container-runtime=crio
E1109 21:41:16.647444  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:16.654580  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:16.664900  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:16.685174  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:16.725490  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:16.805810  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:16.966243  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:17.286898  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:18.017095  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:19.297607  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:21.858434  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:26.979405  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 21:41:37.219612  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-997070 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-997070 --driver=docker  --container-runtime=crio: (32.947706263s)
--- PASS: TestErrorSpam/setup (32.95s)

                                                
                                    
TestErrorSpam/start (0.86s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 start --dry-run
--- PASS: TestErrorSpam/start (0.86s)

                                                
                                    
TestErrorSpam/status (1.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 status
--- PASS: TestErrorSpam/status (1.18s)

                                                
                                    
TestErrorSpam/pause (1.91s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 pause
--- PASS: TestErrorSpam/pause (1.91s)

                                                
                                    
TestErrorSpam/unpause (2.05s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 unpause
--- PASS: TestErrorSpam/unpause (2.05s)

                                                
                                    
TestErrorSpam/stop (1.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 stop: (1.242412959s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997070 --log_dir /tmp/nospam-997070 stop
--- PASS: TestErrorSpam/stop (1.48s)
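
All of the TestErrorSpam subtests share one pattern: run a subcommand against the nospam profile and fail if its output contains unexpected warning or error lines. A rough sketch of that pattern (the real matching in error_spam_test.go uses curated allow/deny lists, not this naive grep):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, sub := range []string{"status", "pause", "unpause", "stop"} {
		out, _ := exec.Command("minikube", "-p", "nospam-demo", sub).CombinedOutput()
		// Flag anything that looks like log spam in ordinary command output.
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "WARNING") || strings.Contains(line, "ERROR") {
				fmt.Printf("unexpected spam from %q: %s\n", sub, line)
			}
		}
	}
}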

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17565-708188/.minikube/files/etc/test/nested/copy/713573/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (76.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-133528 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1109 21:42:38.660779  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-133528 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.183435507s)
--- PASS: TestFunctional/serial/StartWithProxy (76.18s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-133528 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-133528 --alsologtostderr -v=8: (41.960845408s)
functional_test.go:659: soft start took 41.9613342s for "functional-133528" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.96s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-133528 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 cache add registry.k8s.io/pause:3.1: (1.285859543s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 cache add registry.k8s.io/pause:3.3: (1.249781439s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 cache add registry.k8s.io/pause:latest: (1.243845291s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.78s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-133528 /tmp/TestFunctionalserialCacheCmdcacheadd_local714583089/001
E1109 21:44:00.581688  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 cache add minikube-local-cache-test:functional-133528
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 cache delete minikube-local-cache-test:functional-133528
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-133528
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (333.379491ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 cache reload: (1.145775478s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.19s)
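
The interesting part of cache_reload is the round trip above: remove the image inside the node, confirm `crictl inspecti` now fails, run `cache reload`, and confirm it succeeds again. A compressed sketch (profile name hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// mk runs a minikube subcommand against the demo profile.
func mk(args ...string) error {
	out, err := exec.Command("minikube", append([]string{"-p", "func-demo"}, args...)...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	mk("ssh", "sudo crictl rmi "+img)
	// Expect failure: the image was just removed from the node.
	if mk("ssh", "sudo crictl inspecti "+img) == nil {
		panic("image still present after rmi")
	}
	// Reload minikube's local cache into the node; inspecti should succeed again.
	mk("cache", "reload")
	if err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
		panic(err)
	}
}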

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.2s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 kubectl -- --context functional-133528 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.20s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-133528 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.88s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-133528 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-133528 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.880294401s)
functional_test.go:757: restart took 35.880425174s for "functional-133528" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.88s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-133528 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.87s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 logs: (1.870079397s)
--- PASS: TestFunctional/serial/LogsCmd (1.87s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 logs --file /tmp/TestFunctionalserialLogsFileCmd187520990/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 logs --file /tmp/TestFunctionalserialLogsFileCmd187520990/001/logs.txt: (1.849171536s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.85s)

                                                
                                    
TestFunctional/serial/InvalidService (4.73s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-133528 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-133528
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-133528: exit status 115 (544.263919ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31951 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-133528 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.73s)
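
Exit status 115 is minikube's SVC_UNREACHABLE code: the Service object exists and gets a NodePort URL, but no running pod backs it. A sketch of asserting on that exit code (service and profile names copied from the log above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-133528").Run()
	var exitErr *exec.ExitError
	// `minikube service` should fail with 115 when the service has no running pod.
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit code")
	} else {
		fmt.Println("expected exit status 115, got:", err)
	}
}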

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.61s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 config get cpus: exit status 14 (111.151753ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 config get cpus: exit status 14 (120.662523ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.61s)
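
The round trip above is: `config get` on an unset key exits 14, `set` then `get` succeeds, and `unset` puts it back to exit 14. A sketch of the same loop (profile name hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

// exitCode runs `minikube config get cpus` and returns its exit status.
func exitCode() int {
	err := exec.Command("minikube", "-p", "func-demo", "config", "get", "cpus").Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode()
	}
	return 0
}

func main() {
	exec.Command("minikube", "-p", "func-demo", "config", "unset", "cpus").Run()
	if code := exitCode(); code != 14 {
		fmt.Printf("want exit 14 for an unset key, got %d\n", code)
	}
	exec.Command("minikube", "-p", "func-demo", "config", "set", "cpus", "2").Run()
	if code := exitCode(); code != 0 {
		fmt.Printf("want exit 0 after set, got %d\n", code)
	}
}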

                                                
                                    
TestFunctional/parallel/DashboardCmd (33.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-133528 --alsologtostderr -v=1]
2023/11/09 21:49:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-133528 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 740240: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (33.26s)

                                                
                                    
TestFunctional/parallel/DryRun (0.5s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-133528 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-133528 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (222.192659ms)

-- stdout --
	* [functional-133528] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1109 21:49:22.355836  740010 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:49:22.356054  740010 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:49:22.356065  740010 out.go:309] Setting ErrFile to fd 2...
	I1109 21:49:22.356071  740010 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:49:22.356363  740010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 21:49:22.356692  740010 out.go:303] Setting JSON to false
	I1109 21:49:22.357592  740010 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16312,"bootTime":1699550250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 21:49:22.357659  740010 start.go:138] virtualization:  
	I1109 21:49:22.360458  740010 out.go:177] * [functional-133528] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 21:49:22.362967  740010 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 21:49:22.364845  740010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 21:49:22.363020  740010 notify.go:220] Checking for updates...
	I1109 21:49:22.368391  740010 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:49:22.370444  740010 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 21:49:22.372556  740010 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 21:49:22.374378  740010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 21:49:22.376767  740010 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 21:49:22.377312  740010 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 21:49:22.401836  740010 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 21:49:22.401943  740010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:49:22.492560  740010 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-09 21:49:22.480685962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:49:22.492681  740010 docker.go:295] overlay module found
	I1109 21:49:22.496048  740010 out.go:177] * Using the docker driver based on existing profile
	I1109 21:49:22.498309  740010 start.go:298] selected driver: docker
	I1109 21:49:22.498344  740010 start.go:902] validating driver "docker" against &{Name:functional-133528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-133528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:49:22.498448  740010 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 21:49:22.501054  740010 out.go:177] 
	W1109 21:49:22.502996  740010 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1109 21:49:22.504658  740010 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-133528 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.50s)
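
The failing half of DryRun is pure validation: 250MB is below minikube's 1800MB floor, so start exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before touching the cluster. A sketch of that assertion (profile name hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run with an impossible memory request should fail fast.
	err := exec.Command("minikube", "start", "-p", "func-demo", "--dry-run",
		"--memory", "250MB", "--driver=docker", "--container-runtime=crio").Run()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 23 {
		fmt.Println("memory validation rejected 250MB as expected")
	} else {
		fmt.Println("expected exit status 23 from the dry run, got:", err)
	}
}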

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-133528 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-133528 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (211.795617ms)

-- stdout --
	* [functional-133528] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1109 21:49:22.141521  739970 out.go:296] Setting OutFile to fd 1 ...
	I1109 21:49:22.141687  739970 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:49:22.141699  739970 out.go:309] Setting ErrFile to fd 2...
	I1109 21:49:22.141705  739970 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 21:49:22.142061  739970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 21:49:22.142427  739970 out.go:303] Setting JSON to false
	I1109 21:49:22.143347  739970 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16312,"bootTime":1699550250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 21:49:22.143419  739970 start.go:138] virtualization:  
	I1109 21:49:22.146065  739970 out.go:177] * [functional-133528] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1109 21:49:22.148663  739970 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 21:49:22.150461  739970 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 21:49:22.148741  739970 notify.go:220] Checking for updates...
	I1109 21:49:22.153969  739970 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 21:49:22.155891  739970 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 21:49:22.157764  739970 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 21:49:22.159600  739970 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 21:49:22.161754  739970 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 21:49:22.162419  739970 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 21:49:22.185599  739970 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 21:49:22.185695  739970 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 21:49:22.269493  739970 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-09 21:49:22.259807739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 21:49:22.269610  739970 docker.go:295] overlay module found
	I1109 21:49:22.271796  739970 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1109 21:49:22.273775  739970 start.go:298] selected driver: docker
	I1109 21:49:22.273792  739970 start.go:902] validating driver "docker" against &{Name:functional-133528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-133528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1109 21:49:22.273892  739970 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 21:49:22.276738  739970 out.go:177] 
	W1109 21:49:22.279031  739970 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1109 21:49:22.281229  739970 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (18.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-133528 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-133528 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-stcqw" [9d39a8db-0ba7-45b0-819f-4437eacc3238] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-stcqw" [9d39a8db-0ba7-45b0-819f-4437eacc3238] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.013156912s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30248
functional_test.go:1674: http://192.168.49.2:30248: success! body:

Hostname: hello-node-connect-7799dfb7c6-stcqw

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30248
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.67s)
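
End to end, ServiceCmdConnect is: create a deployment, expose it as a NodePort, ask minikube for its URL, and GET it. A sketch with kubectl and net/http (image and port from the log; names hypothetical, error handling trimmed, and the wait-for-Running step omitted):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	exec.Command("kubectl", "create", "deployment", "hello-demo",
		"--image=registry.k8s.io/echoserver-arm:1.8").Run()
	exec.Command("kubectl", "expose", "deployment", "hello-demo",
		"--type=NodePort", "--port=8080").Run()
	// Ask minikube for the NodePort URL, then hit it once the pod is Running.
	url, err := exec.Command("minikube", "-p", "func-demo", "service", "hello-demo", "--url").Output()
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(strings.TrimSpace(string(url)))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // echoserver reports its hostname and the request headers
}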

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh -n functional-133528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 cp functional-133528:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1643400076/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh -n functional-133528 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.68s)

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/713573/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo cat /etc/test/nested/copy/713573/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (1.92s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/713573.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo cat /etc/ssl/certs/713573.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/713573.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo cat /usr/share/ca-certificates/713573.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/7135732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo cat /etc/ssl/certs/7135732.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/7135732.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo cat /usr/share/ca-certificates/7135732.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)
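
CertSync checks that a host certificate named after the test PID is copied into the node at both /etc/ssl/certs and /usr/share/ca-certificates, along with the hash-named link (51391683.0 here) that OpenSSL lookup uses. A sketch of one such existence check, with the paths taken from the log (the real test cats each file rather than just testing for it):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/713573.pem",
		"/usr/share/ca-certificates/713573.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		// A non-zero exit from `test -f` means the cert was not synced into the node.
		if err := exec.Command("minikube", "-p", "func-demo", "ssh", "sudo test -f "+p).Run(); err != nil {
			fmt.Println("missing in node:", p)
		}
	}
}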

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-133528 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 ssh "sudo systemctl is-active docker": exit status 1 (313.786972ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 ssh "sudo systemctl is-active containerd": exit status 1 (308.858948ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
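
Note: this cluster runs with the crio container runtime, so `systemctl is-active` reporting "inactive" (exit status 3) for docker and containerd is the expected, passing result. The positive counterpart, sketched for the same profile:
  minikube -p functional-133528 ssh "sudo systemctl is-active crio"   # expected: active, exit 0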

TestFunctional/parallel/License (0.37s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-133528 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-133528 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-133528 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 736705: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-133528 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
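
Note: `minikube tunnel` is a long-running foreground process that routes traffic to LoadBalancer services; this subtest only verifies that a second concurrent tunnel can start and that both shut down cleanly. Typical interactive usage, sketched:
  minikube -p functional-133528 tunnel --alsologtostderr &   # keep running in the background
  kubectl --context functional-133528 get svc -w             # LoadBalancer services gain an external IP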

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-133528 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-133528 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-133528 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-l9wcq" [55bcd8a8-0038-4aa3-88e6-7b803f6a17df] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-l9wcq" [55bcd8a8-0038-4aa3-88e6-7b803f6a17df] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.013444987s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ServiceCmd/List (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 service list -o json
functional_test.go:1493: Took "554.672741ms" to run "out/minikube-linux-arm64 -p functional-133528 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31112
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31112
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
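
Note: the ServiceCmd subtests above all resolve the same NodePort endpoint (http://192.168.49.2:31112). The flow they exercise, condensed into one sketch (curl on the host PATH is assumed):
  kubectl --context functional-133528 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-133528 expose deployment hello-node --type=NodePort --port=8080
  minikube -p functional-133528 service hello-node --url               # prints http://<node-ip>:<nodeport>
  curl -s "$(minikube -p functional-133528 service hello-node --url)"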

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "357.901705ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "85.003027ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "348.958595ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "70.718267ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
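
Note: the three ProfileCmd subtests differ only in output format; `--light` skips the per-cluster status probes, which is why it returns in ~70-85ms versus ~350ms for the full listing above. Sketch:
  minikube profile list                   # human-readable table, probes each cluster
  minikube profile list -o json           # machine-readable
  minikube profile list -o json --light   # config only, no status probes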

TestFunctional/parallel/MountCmd/any-port (48.21s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdany-port3819892161/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699566508606422839" to /tmp/TestFunctionalparallelMountCmdany-port3819892161/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699566508606422839" to /tmp/TestFunctionalparallelMountCmdany-port3819892161/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699566508606422839" to /tmp/TestFunctionalparallelMountCmdany-port3819892161/001/test-1699566508606422839
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (414.192994ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  9 21:48 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  9 21:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  9 21:48 test-1699566508606422839
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh cat /mount-9p/test-1699566508606422839
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-133528 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aeafaa15-e331-41d6-bd79-e38b33672e65] Pending
helpers_test.go:344: "busybox-mount" [aeafaa15-e331-41d6-bd79-e38b33672e65] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [aeafaa15-e331-41d6-bd79-e38b33672e65] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aeafaa15-e331-41d6-bd79-e38b33672e65] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 45.016631619s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-133528 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdany-port3819892161/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (48.21s)
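
Note: `minikube mount` serves a host directory to the node over 9p; the single failed `findmnt` followed by a successful retry above is the test polling until the mount appears. The same flow, sketched with an illustrative host path:
  minikube mount -p functional-133528 /tmp/data:/mount-9p &   # 9p server, keep running
  minikube -p functional-133528 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-133528 ssh "ls -la /mount-9p"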

TestFunctional/parallel/MountCmd/specific-port (2.02s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdspecific-port3706166865/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (440.98141ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdspecific-port3706166865/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 ssh "sudo umount -f /mount-9p": exit status 1 (311.739902ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-133528 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdspecific-port3706166865/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.1s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1813826644/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1813826644/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1813826644/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T" /mount1: exit status 1 (688.23187ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-133528 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1813826644/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1813826644/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-133528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1813826644/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.10s)
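
Note: VerifyCleanup hinges on `minikube mount --kill=true`, which terminates every 9p mount process for the profile in one call; the "unable to find parent, assuming dead" lines confirm all three daemons were already gone when the test tried to stop them individually. Sketch:
  minikube mount -p functional-133528 --kill=true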

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.86s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.86s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-133528 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-133528
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-133528 image ls --format short --alsologtostderr:
I1109 21:50:20.387464  741655 out.go:296] Setting OutFile to fd 1 ...
I1109 21:50:20.387589  741655 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:20.387597  741655 out.go:309] Setting ErrFile to fd 2...
I1109 21:50:20.387603  741655 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:20.387889  741655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
I1109 21:50:20.388538  741655 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:20.388681  741655 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:20.389185  741655 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
I1109 21:50:20.407980  741655 ssh_runner.go:195] Run: systemctl --version
I1109 21:50:20.408032  741655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
I1109 21:50:20.425627  741655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
I1109 21:50:20.528075  741655 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-133528 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| gcr.io/google-containers/addon-resizer  | functional-133528  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-proxy              | v1.28.3            | a5dd5cdd6d3ef | 69.9MB |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 42a4e73724daa | 59.2MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| localhost/my-image                      | functional-133528  | aca3c71992e12 | 1.64MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 537e9a59ee2fd | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 8276439b4f237 | 117MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-133528 image ls --format table --alsologtostderr:
I1109 21:50:24.032390  741976 out.go:296] Setting OutFile to fd 1 ...
I1109 21:50:24.032630  741976 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:24.032657  741976 out.go:309] Setting ErrFile to fd 2...
I1109 21:50:24.032677  741976 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:24.033294  741976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
I1109 21:50:24.033974  741976 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:24.034130  741976 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:24.034667  741976 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
I1109 21:50:24.062505  741976 ssh_runner.go:195] Run: systemctl --version
I1109 21:50:24.062573  741976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
I1109 21:50:24.084551  741976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
I1109 21:50:24.180132  741976 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-133528 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-133528"],"size":"34114467"},{"id":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa","registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"121054158"},{
"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"aca3c71992e121107fa4a22cda36c72c8123e29e378523dde1d743a35b4f37d5","repoDigests":["localhost/my-image@sha256:6b28a89e2c3e05ac3ce00a4fe289335c9132a10fdef9d9825fedc5a226fbdd39"],"repoTags":["localhost/my-image:functional-133528"],"size":"1640226"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc5
50d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"117252916"},{"id":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"59188020"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b
673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"c5ee9d0039dbe21f4aeacd98fc578a71b8a9bcf3e2409c80a51008ff8358fac0","repoDigests":["docker.io/library/2ff90b12c464b181c0f9c3a8607fea86b10f5a7f7465250e70ab55acb003b8ad-tmp@sha256:943df2ee9a49b839d204b25370b2a4a275c4d80c15385751a9dfa5608beff691"],"repoTags":[],"size":"1637644"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d2
8319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s
-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"69926807"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5
bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-133528 image ls --format json --alsologtostderr:
I1109 21:50:23.771204  741947 out.go:296] Setting OutFile to fd 1 ...
I1109 21:50:23.771429  741947 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:23.771451  741947 out.go:309] Setting ErrFile to fd 2...
I1109 21:50:23.771471  741947 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:23.771747  741947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
I1109 21:50:23.772432  741947 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:23.772599  741947 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:23.773212  741947 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
I1109 21:50:23.792305  741947 ssh_runner.go:195] Run: systemctl --version
I1109 21:50:23.792366  741947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
I1109 21:50:23.812646  741947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
I1109 21:50:23.910626  741947 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-133528 image ls --format yaml --alsologtostderr:
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-133528
size: "34114467"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "121054158"
- id: a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "69926807"
- id: 42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "59188020"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "117252916"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-133528 image ls --format yaml --alsologtostderr:
I1109 21:50:20.657128  741680 out.go:296] Setting OutFile to fd 1 ...
I1109 21:50:20.657317  741680 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:20.657349  741680 out.go:309] Setting ErrFile to fd 2...
I1109 21:50:20.657372  741680 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:20.657640  741680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
I1109 21:50:20.658385  741680 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:20.658566  741680 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:20.659159  741680 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
I1109 21:50:20.676889  741680 ssh_runner.go:195] Run: systemctl --version
I1109 21:50:20.676942  741680 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
I1109 21:50:20.696696  741680 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
I1109 21:50:20.796138  741680 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
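
Note: the four ImageList subtests run the same `image ls` with different --format values, and each one (per the stderr traces above) shells into the node and reads `sudo crictl images --output json`. Sketch:
  minikube -p functional-133528 image ls --format short
  minikube -p functional-133528 image ls --format table
  minikube -p functional-133528 image ls --format json
  minikube -p functional-133528 image ls --format yaml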

TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-133528 ssh pgrep buildkitd: exit status 1 (313.629279ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image build -t localhost/my-image:functional-133528 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 image build -t localhost/my-image:functional-133528 testdata/build --alsologtostderr: (2.27144048s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-133528 image build -t localhost/my-image:functional-133528 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c5ee9d0039d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-133528
--> aca3c71992e
Successfully tagged localhost/my-image:functional-133528
aca3c71992e121107fa4a22cda36c72c8123e29e378523dde1d743a35b4f37d5
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-133528 image build -t localhost/my-image:functional-133528 testdata/build --alsologtostderr:
I1109 21:50:21.228922  741757 out.go:296] Setting OutFile to fd 1 ...
I1109 21:50:21.229544  741757 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:21.229579  741757 out.go:309] Setting ErrFile to fd 2...
I1109 21:50:21.229599  741757 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1109 21:50:21.229898  741757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
I1109 21:50:21.230650  741757 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:21.231320  741757 config.go:182] Loaded profile config "functional-133528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1109 21:50:21.231907  741757 cli_runner.go:164] Run: docker container inspect functional-133528 --format={{.State.Status}}
I1109 21:50:21.251359  741757 ssh_runner.go:195] Run: systemctl --version
I1109 21:50:21.251423  741757 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-133528
I1109 21:50:21.270540  741757 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33685 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/functional-133528/id_rsa Username:docker}
I1109 21:50:21.368130  741757 build_images.go:151] Building image from path: /tmp/build.1478995971.tar
I1109 21:50:21.368214  741757 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1109 21:50:21.380635  741757 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1478995971.tar
I1109 21:50:21.386114  741757 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1478995971.tar: stat -c "%s %y" /var/lib/minikube/build/build.1478995971.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1478995971.tar': No such file or directory
I1109 21:50:21.386146  741757 ssh_runner.go:362] scp /tmp/build.1478995971.tar --> /var/lib/minikube/build/build.1478995971.tar (3072 bytes)
I1109 21:50:21.416079  741757 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1478995971
I1109 21:50:21.427476  741757 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1478995971 -xf /var/lib/minikube/build/build.1478995971.tar
I1109 21:50:21.438701  741757 crio.go:297] Building image: /var/lib/minikube/build/build.1478995971
I1109 21:50:21.438768  741757 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-133528 /var/lib/minikube/build/build.1478995971 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1109 21:50:23.399602  741757 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-133528 /var/lib/minikube/build/build.1478995971 --cgroup-manager=cgroupfs: (1.960812058s)
I1109 21:50:23.399682  741757 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1478995971
I1109 21:50:23.412005  741757 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1478995971.tar
I1109 21:50:23.423934  741757 build_images.go:207] Built localhost/my-image:functional-133528 from /tmp/build.1478995971.tar
I1109 21:50:23.423963  741757 build_images.go:123] succeeded building to: functional-133528
I1109 21:50:23.423968  741757 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)
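
Note: on the crio runtime there is no buildkitd (hence the expected non-zero `pgrep` above), so `image build` delegates to `sudo podman build` inside the node. The equivalent manual steps, sketched:
  minikube -p functional-133528 image build -t localhost/my-image:functional-133528 testdata/build
  minikube -p functional-133528 image ls   # confirm localhost/my-image landed in the node's store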

TestFunctional/parallel/ImageCommands/Setup (1.74s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.710012741s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-133528
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image load --daemon gcr.io/google-containers/addon-resizer:functional-133528 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 image load --daemon gcr.io/google-containers/addon-resizer:functional-133528 --alsologtostderr: (4.122314576s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image load --daemon gcr.io/google-containers/addon-resizer:functional-133528 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 image load --daemon gcr.io/google-containers/addon-resizer:functional-133528 --alsologtostderr: (2.714560465s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.036692121s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-133528
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image load --daemon gcr.io/google-containers/addon-resizer:functional-133528 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 image load --daemon gcr.io/google-containers/addon-resizer:functional-133528 --alsologtostderr: (3.641992223s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.96s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image save gcr.io/google-containers/addon-resizer:functional-133528 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.99s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image rm gcr.io/google-containers/addon-resizer:functional-133528 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-133528 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.063703182s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.33s)
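
Taken together, ImageSaveToFile and ImageLoadFromFile exercise a full tarball round trip through the cluster's container runtime. A minimal sketch of the same flow, with the profile name, tag, and path as placeholders:

    # export a tag from the cluster runtime, drop it, then restore it from the tar
    minikube -p <profile> image save gcr.io/google-containers/addon-resizer:<tag> /tmp/addon-resizer.tar
    minikube -p <profile> image rm gcr.io/google-containers/addon-resizer:<tag>
    minikube -p <profile> image load /tmp/addon-resizer.tar
    minikube -p <profile> image ls   # the tag should be listed again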

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-133528
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 image save --daemon gcr.io/google-containers/addon-resizer:functional-133528 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-133528
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)
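
ImageSaveDaemon covers the opposite direction: the tag is first removed from the host's Docker daemon, `image save --daemon` writes it back, and `docker image inspect` (which exits non-zero for a missing image) acts as the assertion. The same check by hand, with placeholder names:

    docker rmi gcr.io/google-containers/addon-resizer:<tag>
    minikube -p <profile> image save --daemon gcr.io/google-containers/addon-resizer:<tag>
    docker image inspect gcr.io/google-containers/addon-resizer:<tag>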

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-133528 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
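
All three UpdateContextCmd cases invoke the same command at functional_test.go:2115 and appear to differ only in the kubeconfig state prepared beforehand. `update-context` rewrites the profile's server address in kubeconfig, which can be confirmed afterwards; a sketch (the profile name is a placeholder, the jsonpath follows the standard kubeconfig layout):

    minikube -p <profile> update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="<profile>")].cluster.server}'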

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-133528 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-133528
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-133528
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-133528
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (86.17s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-861900 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1109 21:51:16.647116  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-861900 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m26.173239722s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (86.17s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-861900 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

                                                
                                    
TestJSONOutput/start/Command (75.02s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-020000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1109 21:59:50.726983  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 22:00:18.410670  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-020000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m15.021898208s)
--- PASS: TestJSONOutput/start/Command (75.02s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.81s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-020000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-020000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.75s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.89s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-020000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-020000 --output=json --user=testUser: (5.887853029s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.27s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-664580 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-664580 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.450865ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c77309e5-68a9-42fd-9a5a-51e48a3c39cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-664580] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1261001b-7a35-4eb7-97d1-967d25e9a459","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17565"}}
	{"specversion":"1.0","id":"5742d0e8-502a-4ec9-83d7-0959e55b78ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"247badc8-11c9-4645-b88d-e05f5f5897ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig"}}
	{"specversion":"1.0","id":"0b411cd2-89f7-40f5-a496-241a3e9bc30e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube"}}
	{"specversion":"1.0","id":"184d6063-f795-4253-8a1e-8c0f6d0185f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ae5a6762-8642-4540-98bb-90d9c3767d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"55da64d6-51b2-44cc-8829-edf9684b9723","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-664580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-664580
--- PASS: TestErrorJSONOutput (0.27s)
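
The captured stdout shows that --output=json emits one CloudEvents-style object per line (type io.k8s.sigs.minikube.step, .info, or .error), so failures can be extracted mechanically instead of scraping text. A sketch assuming jq is installed; the profile name is arbitrary:

    minikube start -p demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + ": " + .data.message'
    # prints: 56: The driver 'fail' is not supported on linux/arm64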

                                                
                                    
TestKicCustomNetwork/create_custom_network (46.66s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-975884 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-975884 --network=: (44.461761384s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-975884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-975884
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-975884: (2.178665711s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.66s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (34.78s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-698298 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-698298 --network=bridge: (32.815088697s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-698298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-698298
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-698298: (1.94493784s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.78s)

                                                
                                    
TestKicExistingNetwork (36.39s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-396739 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-396739 --network=existing-network: (34.283386846s)
helpers_test.go:175: Cleaning up "existing-network-396739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-396739
E1109 22:03:15.586425  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:15.592048  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:15.602258  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:15.622901  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:15.663122  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:15.743353  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-396739: (1.953367406s)
E1109 22:03:15.903858  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
--- PASS: TestKicExistingNetwork (36.39s)
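
Unlike the --network= cases above, TestKicExistingNetwork requires a docker network that predates the cluster, so minikube must adopt it rather than create it. Reproducing the setup by hand looks roughly like this (names are arbitrary):

    docker network create existing-network
    minikube start -p net-test --network=existing-network
    docker network ls --format '{{.Name}}'   # the adopted network should appear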

                                                
                                    
TestKicCustomSubnet (36.4s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-567804 --subnet=192.168.60.0/24
E1109 22:03:16.224592  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:16.865234  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:18.145552  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:20.707279  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:25.828457  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:03:36.069498  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-567804 --subnet=192.168.60.0/24: (34.28322734s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-567804 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-567804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-567804
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-567804: (2.094185703s)
--- PASS: TestKicCustomSubnet (36.40s)
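
The assertion behind kic_custom_network_test.go:161 is a single docker inspect call, so the subnet claim is easy to re-check from a shell:

    minikube start -p subnet-test --subnet=192.168.60.0/24
    docker network inspect subnet-test --format '{{(index .IPAM.Config 0).Subnet}}'
    # expect: 192.168.60.0/24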

                                                
                                    
TestKicStaticIP (34.09s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-898359 --static-ip=192.168.200.200
E1109 22:03:56.549730  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-898359 --static-ip=192.168.200.200: (31.727150596s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-898359 ip
helpers_test.go:175: Cleaning up "static-ip-898359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-898359
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-898359: (2.178627785s)
--- PASS: TestKicStaticIP (34.09s)
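
TestKicStaticIP pins the node address at start time and reads it back with `minikube ip`; a sketch of the same check:

    minikube start -p ip-test --static-ip=192.168.200.200
    minikube -p ip-test ip   # expect 192.168.200.200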

                                                
                                    
TestMainNoArgs (0.07s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (69.88s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-390888 --driver=docker  --container-runtime=crio
E1109 22:04:37.510918  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:04:50.727545  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-390888 --driver=docker  --container-runtime=crio: (31.735253207s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-393440 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-393440 --driver=docker  --container-runtime=crio: (32.845156002s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-390888
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-393440
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-393440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-393440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-393440: (2.006744583s)
helpers_test.go:175: Cleaning up "first-390888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-390888
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-390888: (1.997903769s)
--- PASS: TestMinikubeProfile (69.88s)
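
TestMinikubeProfile drives the profile-switching workflow end to end: two clusters, a `minikube profile` switch to each, and a `profile list -ojson` after every switch to observe the active profile. Condensed:

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first          # select the active profile
    minikube profile list -ojson    # inspect which profile is marked active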

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-974382 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-974382 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.940907091s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.94s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-974382 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-976187 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-976187 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.236780099s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.24s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-976187 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-974382 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-974382 --alsologtostderr -v=5: (1.678087029s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-976187 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                    
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-976187
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-976187: (1.254964164s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-976187
E1109 22:05:59.432525  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-976187: (7.216543654s)
--- PASS: TestMountStart/serial/RestartStopped (8.22s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-976187 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)
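
The MountStart sequence keeps re-asserting one invariant: the host mount at /minikube-host stays readable after the sibling profile is deleted and after the profile's own stop/start cycle. The core check is the same everywhere (flags abbreviated from the runs above):

    minikube start -p mnt --mount --mount-port 46465 --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mnt ssh -- ls /minikube-host   # a listing proves the mount is live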

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (135.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-833232 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1109 22:06:16.647731  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 22:08:15.586424  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-833232 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.214571897s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.79s)
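
FreshStart2Nodes is the baseline the rest of the MultiNode suite builds on: a single start with --nodes=2, then a status call to confirm both machines report Running. Condensed:

    minikube start -p multinode --nodes=2 --wait=true --driver=docker --container-runtime=crio
    minikube -p multinode status --alsologtostderr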

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-833232 -- rollout status deployment/busybox: (3.194831976s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-76fbj -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-zwn9f -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-76fbj -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-zwn9f -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-76fbj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-833232 -- exec busybox-5bc68d56bd-zwn9f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.42s)
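
The deployment check resolves three names (an external host, the short service name, and the cluster FQDN) from every busybox replica, which surfaces per-node DNS breakage rather than only cluster-wide failure. One replica's worth, with the pod name as a placeholder:

    kubectl rollout status deployment/busybox
    for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        kubectl exec <busybox-pod> -- nslookup "$name"
    done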

                                                
                                    
TestMultiNode/serial/AddNode (48.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-833232 -v 3 --alsologtostderr
E1109 22:08:43.273400  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-833232 -v 3 --alsologtostderr: (47.772957078s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.52s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp testdata/cp-test.txt multinode-833232:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile404074367/001/cp-test_multinode-833232.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232:/home/docker/cp-test.txt multinode-833232-m02:/home/docker/cp-test_multinode-833232_multinode-833232-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m02 "sudo cat /home/docker/cp-test_multinode-833232_multinode-833232-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232:/home/docker/cp-test.txt multinode-833232-m03:/home/docker/cp-test_multinode-833232_multinode-833232-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m03 "sudo cat /home/docker/cp-test_multinode-833232_multinode-833232-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp testdata/cp-test.txt multinode-833232-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile404074367/001/cp-test_multinode-833232-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232-m02:/home/docker/cp-test.txt multinode-833232:/home/docker/cp-test_multinode-833232-m02_multinode-833232.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232 "sudo cat /home/docker/cp-test_multinode-833232-m02_multinode-833232.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232-m02:/home/docker/cp-test.txt multinode-833232-m03:/home/docker/cp-test_multinode-833232-m02_multinode-833232-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m03 "sudo cat /home/docker/cp-test_multinode-833232-m02_multinode-833232-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp testdata/cp-test.txt multinode-833232-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile404074367/001/cp-test_multinode-833232-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232-m03:/home/docker/cp-test.txt multinode-833232:/home/docker/cp-test_multinode-833232-m03_multinode-833232.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232 "sudo cat /home/docker/cp-test_multinode-833232-m03_multinode-833232.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 cp multinode-833232-m03:/home/docker/cp-test.txt multinode-833232-m02:/home/docker/cp-test_multinode-833232-m03_multinode-833232-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 ssh -n multinode-833232-m02 "sudo cat /home/docker/cp-test_multinode-833232-m03_multinode-833232-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.35s)
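
CopyFile walks the full copy matrix: local→node, node→local, and node→node for every pair, reading each file back over ssh after every transfer. One cell of that matrix, in the same style as the runs above (profile and node names are placeholders):

    minikube -p multinode cp testdata/cp-test.txt multinode-m02:/home/docker/cp-test.txt
    minikube -p multinode ssh -n multinode-m02 "sudo cat /home/docker/cp-test.txt"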

                                                
                                    
TestMultiNode/serial/StopNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-833232 node stop m03: (1.228956252s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-833232 status: exit status 7 (597.539291ms)

                                                
                                                
-- stdout --
	multinode-833232
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-833232-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-833232-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-833232 status --alsologtostderr: exit status 7 (583.702279ms)

                                                
                                                
-- stdout --
	multinode-833232
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-833232-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-833232-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 22:09:34.552930  787590 out.go:296] Setting OutFile to fd 1 ...
	I1109 22:09:34.553048  787590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:09:34.553054  787590 out.go:309] Setting ErrFile to fd 2...
	I1109 22:09:34.553060  787590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:09:34.553333  787590 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 22:09:34.553507  787590 out.go:303] Setting JSON to false
	I1109 22:09:34.553571  787590 mustload.go:65] Loading cluster: multinode-833232
	I1109 22:09:34.553658  787590 notify.go:220] Checking for updates...
	I1109 22:09:34.554042  787590 config.go:182] Loaded profile config "multinode-833232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 22:09:34.554053  787590 status.go:255] checking status of multinode-833232 ...
	I1109 22:09:34.554630  787590 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Status}}
	I1109 22:09:34.575307  787590 status.go:330] multinode-833232 host status = "Running" (err=<nil>)
	I1109 22:09:34.575342  787590 host.go:66] Checking if "multinode-833232" exists ...
	I1109 22:09:34.575639  787590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-833232
	I1109 22:09:34.592958  787590 host.go:66] Checking if "multinode-833232" exists ...
	I1109 22:09:34.593439  787590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 22:09:34.593516  787590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232
	I1109 22:09:34.624350  787590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33750 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232/id_rsa Username:docker}
	I1109 22:09:34.721034  787590 ssh_runner.go:195] Run: systemctl --version
	I1109 22:09:34.726690  787590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 22:09:34.740991  787590 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:09:34.815653  787590 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-09 22:09:34.805306578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:09:34.816268  787590 kubeconfig.go:92] found "multinode-833232" server: "https://192.168.58.2:8443"
	I1109 22:09:34.816292  787590 api_server.go:166] Checking apiserver status ...
	I1109 22:09:34.816342  787590 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 22:09:34.829313  787590 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1279/cgroup
	I1109 22:09:34.840571  787590 api_server.go:182] apiserver freezer: "7:freezer:/docker/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/crio/crio-5ba7f8692382c39266c99e281835c4c67f4e4306e5cf4bc670545bd6a298a3ff"
	I1109 22:09:34.840662  787590 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bc2ae93f7ba616c3d22109d7f85136aeece0d17aa7e28ac5210220c9639cc6c6/crio/crio-5ba7f8692382c39266c99e281835c4c67f4e4306e5cf4bc670545bd6a298a3ff/freezer.state
	I1109 22:09:34.850992  787590 api_server.go:204] freezer state: "THAWED"
	I1109 22:09:34.851019  787590 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1109 22:09:34.859833  787590 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1109 22:09:34.859863  787590 status.go:421] multinode-833232 apiserver status = Running (err=<nil>)
	I1109 22:09:34.859875  787590 status.go:257] multinode-833232 status: &{Name:multinode-833232 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 22:09:34.859916  787590 status.go:255] checking status of multinode-833232-m02 ...
	I1109 22:09:34.860251  787590 cli_runner.go:164] Run: docker container inspect multinode-833232-m02 --format={{.State.Status}}
	I1109 22:09:34.879657  787590 status.go:330] multinode-833232-m02 host status = "Running" (err=<nil>)
	I1109 22:09:34.879685  787590 host.go:66] Checking if "multinode-833232-m02" exists ...
	I1109 22:09:34.880028  787590 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-833232-m02
	I1109 22:09:34.897625  787590 host.go:66] Checking if "multinode-833232-m02" exists ...
	I1109 22:09:34.897946  787590 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 22:09:34.897990  787590 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-833232-m02
	I1109 22:09:34.916279  787590 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33755 SSHKeyPath:/home/jenkins/minikube-integration/17565-708188/.minikube/machines/multinode-833232-m02/id_rsa Username:docker}
	I1109 22:09:35.017630  787590 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 22:09:35.032320  787590 status.go:257] multinode-833232-m02 status: &{Name:multinode-833232-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1109 22:09:35.032354  787590 status.go:255] checking status of multinode-833232-m03 ...
	I1109 22:09:35.032680  787590 cli_runner.go:164] Run: docker container inspect multinode-833232-m03 --format={{.State.Status}}
	I1109 22:09:35.052223  787590 status.go:330] multinode-833232-m03 host status = "Stopped" (err=<nil>)
	I1109 22:09:35.052254  787590 status.go:343] host is not running, skipping remaining checks
	I1109 22:09:35.052262  787590 status.go:257] multinode-833232-m03 status: &{Name:multinode-833232-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
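
Worth noting from the non-zero exits above: `minikube status` returns a non-zero exit status (7 in these runs) while any node in the profile is stopped, so scripts can gate on the exit code instead of parsing the table. For example:

    minikube -p multinode node stop m03
    if ! minikube -p multinode status; then
        echo "at least one node is not running"
    fi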

                                                
                                    
TestMultiNode/serial/StartAfterStop (12.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-833232 node start m03 --alsologtostderr: (11.799062897s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.65s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (123.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-833232
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-833232
E1109 22:09:50.727539  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-833232: (24.949285194s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-833232 --wait=true -v=8 --alsologtostderr
E1109 22:11:13.771403  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 22:11:16.646939  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-833232 --wait=true -v=8 --alsologtostderr: (1m37.946180139s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-833232
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.07s)

TestMultiNode/serial/DeleteNode (5.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-833232 node delete m03: (4.347718778s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)
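
Note: the final assertion above checks that every remaining node reports a Ready condition of "True", using a kubectl go-template. An equivalent check via kubectl's jsonpath output, sketched in Go (kubectl on PATH and a working kubeconfig are assumed; this is not the test's own code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same effect as the go-template in the log: print each node's
	// Ready condition status, one per line.
	out, err := exec.Command("kubectl", "get", "nodes", "-o",
		`jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
	if err != nil {
		panic(err)
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("node not ready:", status)
		}
	}
}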

TestMultiNode/serial/StopMultiNode (24.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-833232 stop: (23.892779429s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-833232 status: exit status 7 (112.731099ms)

-- stdout --
	multinode-833232
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-833232-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-833232 status --alsologtostderr: exit status 7 (105.472477ms)

-- stdout --
	multinode-833232
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-833232-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1109 22:12:19.987434  795758 out.go:296] Setting OutFile to fd 1 ...
	I1109 22:12:19.987613  795758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:12:19.987625  795758 out.go:309] Setting ErrFile to fd 2...
	I1109 22:12:19.987631  795758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:12:19.987923  795758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 22:12:19.988100  795758 out.go:303] Setting JSON to false
	I1109 22:12:19.988136  795758 mustload.go:65] Loading cluster: multinode-833232
	I1109 22:12:19.988238  795758 notify.go:220] Checking for updates...
	I1109 22:12:19.988559  795758 config.go:182] Loaded profile config "multinode-833232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 22:12:19.988570  795758 status.go:255] checking status of multinode-833232 ...
	I1109 22:12:19.989104  795758 cli_runner.go:164] Run: docker container inspect multinode-833232 --format={{.State.Status}}
	I1109 22:12:20.009334  795758 status.go:330] multinode-833232 host status = "Stopped" (err=<nil>)
	I1109 22:12:20.009361  795758 status.go:343] host is not running, skipping remaining checks
	I1109 22:12:20.009376  795758 status.go:257] multinode-833232 status: &{Name:multinode-833232 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 22:12:20.009412  795758 status.go:255] checking status of multinode-833232-m02 ...
	I1109 22:12:20.009745  795758 cli_runner.go:164] Run: docker container inspect multinode-833232-m02 --format={{.State.Status}}
	I1109 22:12:20.028589  795758 status.go:330] multinode-833232-m02 host status = "Stopped" (err=<nil>)
	I1109 22:12:20.028615  795758 status.go:343] host is not running, skipping remaining checks
	I1109 22:12:20.028624  795758 status.go:257] multinode-833232-m02 status: &{Name:multinode-833232-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.11s)

TestMultiNode/serial/RestartMultiNode (77.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-833232 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1109 22:13:15.585935  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-833232 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.189587896s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-833232 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.98s)

TestMultiNode/serial/ValidateNameConflict (35.36s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-833232
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-833232-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-833232-m02 --driver=docker  --container-runtime=crio: exit status 14 (103.471058ms)

-- stdout --
	* [multinode-833232-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-833232-m02' is duplicated with machine name 'multinode-833232-m02' in profile 'multinode-833232'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-833232-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-833232-m03 --driver=docker  --container-runtime=crio: (32.762577895s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-833232
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-833232: exit status 80 (384.140069ms)

-- stdout --
	* Adding node m03 to cluster multinode-833232
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-833232-m03 already exists in multinode-833232-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-833232-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-833232-m03: (2.040507749s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.36s)

TestPreload (164.54s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-638006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1109 22:14:50.727514  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-638006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m21.330187011s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-638006 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-638006 image pull gcr.io/k8s-minikube/busybox: (1.886142095s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-638006
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-638006: (5.88288791s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-638006 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1109 22:16:16.647233  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-638006 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m12.702839145s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-638006 image list
helpers_test.go:175: Cleaning up "test-preload-638006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-638006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-638006: (2.442430696s)
--- PASS: TestPreload (164.54s)

TestScheduledStopUnix (108.45s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-261524 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-261524 --memory=2048 --driver=docker  --container-runtime=crio: (31.184850861s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-261524 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-261524 -n scheduled-stop-261524
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-261524 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-261524 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-261524 -n scheduled-stop-261524
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-261524
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-261524 --schedule 15s
E1109 22:18:15.586018  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-261524
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-261524: exit status 7 (91.102045ms)

-- stdout --
	scheduled-stop-261524
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-261524 -n scheduled-stop-261524
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-261524 -n scheduled-stop-261524: exit status 7 (88.398594ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-261524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-261524
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-261524: (5.394612213s)
--- PASS: TestScheduledStopUnix (108.45s)
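
Note: the test above schedules stops (--schedule 5m / 15s), cancels one, and then polls status until the host reports Stopped. A hedged Go sketch of such a polling loop; the bare "minikube" binary name, 2-minute deadline, and 1s interval are illustrative assumptions. As the log shows, "minikube status" exits non-zero (status 7) once the host is stopped, so only stdout is inspected:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-261524"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same probe as the test: minikube status --format={{.Host}} -p <profile>
		// The error is ignored deliberately: a stopped host yields exit status 7
		// but still prints "Stopped" on stdout.
		out, _ := exec.Command("minikube", "status", "--format", "{{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("scheduled stop completed")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for scheduled stop")
}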

TestInsufficientStorage (11.27s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-741898 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-741898 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.590335135s)

-- stdout --
	{"specversion":"1.0","id":"ac5219c0-e917-4619-a187-6a9bee117904","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-741898] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7923e4a-b04c-4de6-b698-833996e9637e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17565"}}
	{"specversion":"1.0","id":"a04de532-f78b-48de-9bbe-23261bf37431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"215a6cf8-ddcc-4727-b6e6-83671412cd40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig"}}
	{"specversion":"1.0","id":"e2350ef6-ac4b-450f-8927-8f41b34f9a6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube"}}
	{"specversion":"1.0","id":"92cc5d1f-06dc-47e3-81c2-30b50d106c30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"47a87697-21fc-4c22-8033-a755c5590b4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e478938b-acbf-4c5a-9270-3ccb7eb56da7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d59c9f22-5d28-4f56-a726-9fde60fe129a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8136e49b-fb20-4af9-b627-720b97ea9095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0eeec36-af88-4139-800f-0d89e522adbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"58ff7626-dae4-4efd-858b-e55204b2fc4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-741898 in cluster insufficient-storage-741898","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"da13b24a-d3b7-4cfd-b4e6-6f5e8f4812ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b58f8b03-2c2e-46d8-8901-d530b85203b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"65661d9e-08f6-4c0d-acbd-0364df81393f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-741898 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-741898 --output=json --layout=cluster: exit status 7 (352.244806ms)

-- stdout --
	{"Name":"insufficient-storage-741898","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-741898","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1109 22:19:01.746985  812651 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-741898" does not appear in /home/jenkins/minikube-integration/17565-708188/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-741898 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-741898 --output=json --layout=cluster: exit status 7 (327.937651ms)

-- stdout --
	{"Name":"insufficient-storage-741898","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-741898","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1109 22:19:02.077586  812702 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-741898" does not appear in /home/jenkins/minikube-integration/17565-708188/kubeconfig
	E1109 22:19:02.089836  812702 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/insufficient-storage-741898/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-741898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-741898
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-741898: (1.994487275s)
--- PASS: TestInsufficientStorage (11.27s)
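
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, as captured in the stdout block above. A minimal Go sketch of a consumer that scans such a stream and surfaces error events (type "io.k8s.sigs.minikube.error"); the envelope field names are taken from the log above, and reading from stdin is an illustrative assumption:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the fields this sketch needs from the envelope;
// in the error event above, all data values are strings (e.g. "exitcode":"26").
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from: minikube start --output=json
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("exitcode:", ev.Data["exitcode"], "-", ev.Data["message"])
		}
	}
}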

TestKubernetesUpgrade (395.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-005653 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1109 22:21:16.647534  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-005653 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.110261649s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-005653
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-005653: (2.608477804s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-005653 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-005653 status --format={{.Host}}: exit status 7 (88.218498ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-005653 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-005653 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m44.130421035s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-005653 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-005653 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-005653 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (116.842618ms)

-- stdout --
	* [kubernetes-upgrade-005653] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-005653
	    minikube start -p kubernetes-upgrade-005653 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0056532 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-005653 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-005653 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-005653 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.928954229s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-005653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-005653
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-005653: (2.872390134s)
--- PASS: TestKubernetesUpgrade (395.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-041887 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-041887 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (107.149484ms)

-- stdout --
	* [NoKubernetes-041887] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (45.05s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-041887 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-041887 --driver=docker  --container-runtime=crio: (44.522313482s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-041887 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.05s)

TestNoKubernetes/serial/StartWithStopK8s (8.39s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-041887 --no-kubernetes --driver=docker  --container-runtime=crio
E1109 22:19:50.727730  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-041887 --no-kubernetes --driver=docker  --container-runtime=crio: (5.770119602s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-041887 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-041887 status -o json: exit status 2 (539.922364ms)

-- stdout --
	{"Name":"NoKubernetes-041887","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-041887
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-041887: (2.076634444s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.39s)

TestNoKubernetes/serial/Start (10.84s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-041887 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-041887 --no-kubernetes --driver=docker  --container-runtime=crio: (10.837675812s)
--- PASS: TestNoKubernetes/serial/Start (10.84s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-041887 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-041887 "sudo systemctl is-active --quiet service kubelet": exit status 1 (382.501129ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-041887
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-041887: (1.314326014s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (7.51s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-041887 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-041887 --driver=docker  --container-runtime=crio: (7.513628825s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.51s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.57s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-041887 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-041887 "sudo systemctl is-active --quiet service kubelet": exit status 1 (567.101334ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.57s)

TestStoppedBinaryUpgrade/Setup (1.11s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-713444
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

TestPause/serial/Start (57.56s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-001567 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1109 22:24:50.727923  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-001567 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (57.561123235s)
--- PASS: TestPause/serial/Start (57.56s)

TestPause/serial/SecondStartNoReconfiguration (29.73s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-001567 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-001567 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.684151863s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.73s)

TestPause/serial/Pause (1.23s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-001567 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-001567 --alsologtostderr -v=5: (1.226544514s)
--- PASS: TestPause/serial/Pause (1.23s)

TestPause/serial/VerifyStatus (0.47s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-001567 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-001567 --output=json --layout=cluster: exit status 2 (467.910335ms)

-- stdout --
	{"Name":"pause-001567","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-001567","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.47s)

TestPause/serial/Unpause (0.92s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-001567 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.92s)

TestPause/serial/PauseAgain (1.08s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-001567 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-001567 --alsologtostderr -v=5: (1.078780805s)
--- PASS: TestPause/serial/PauseAgain (1.08s)

TestPause/serial/DeletePaused (3.4s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-001567 --alsologtostderr -v=5
E1109 22:26:16.646886  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-001567 --alsologtostderr -v=5: (3.396876758s)
--- PASS: TestPause/serial/DeletePaused (3.40s)

TestPause/serial/VerifyDeletedResources (0.54s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-001567
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-001567: exit status 1 (21.865362ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-001567: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.54s)
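
Note: the cleanup verification above treats a failing "docker volume inspect" whose stderr says "no such volume" as confirmation that the profile's volume was removed. A small Go sketch of the same check, illustrative only and not pause_test.go's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "pause-001567"
	// CombinedOutput captures the daemon error message alongside any stdout.
	out, err := exec.Command("docker", "volume", "inspect", profile).CombinedOutput()
	if err != nil && strings.Contains(string(out), "no such volume") {
		fmt.Println("volume gone, as expected after delete")
		return
	}
	fmt.Printf("volume still present or unexpected error: %s\n", out)
}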

TestNetworkPlugins/group/false (5.83s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-228645 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-228645 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (303.290918ms)

-- stdout --
	* [false-228645] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17565
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1109 22:27:01.477159  851888 out.go:296] Setting OutFile to fd 1 ...
	I1109 22:27:01.477413  851888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:27:01.477440  851888 out.go:309] Setting ErrFile to fd 2...
	I1109 22:27:01.477458  851888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 22:27:01.477758  851888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17565-708188/.minikube/bin
	I1109 22:27:01.478194  851888 out.go:303] Setting JSON to false
	I1109 22:27:01.479292  851888 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":18571,"bootTime":1699550250,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1109 22:27:01.479392  851888 start.go:138] virtualization:  
	I1109 22:27:01.482348  851888 out.go:177] * [false-228645] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1109 22:27:01.484619  851888 out.go:177]   - MINIKUBE_LOCATION=17565
	I1109 22:27:01.486850  851888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 22:27:01.484700  851888 notify.go:220] Checking for updates...
	I1109 22:27:01.491340  851888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17565-708188/kubeconfig
	I1109 22:27:01.493307  851888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17565-708188/.minikube
	I1109 22:27:01.495486  851888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 22:27:01.497482  851888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 22:27:01.500109  851888 config.go:182] Loaded profile config "force-systemd-env-985695": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1109 22:27:01.500214  851888 driver.go:378] Setting default libvirt URI to qemu:///system
	I1109 22:27:01.541592  851888 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1109 22:27:01.541682  851888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 22:27:01.674708  851888 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:48 SystemTime:2023-11-09 22:27:01.664996692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1109 22:27:01.674833  851888 docker.go:295] overlay module found
	I1109 22:27:01.679989  851888 out.go:177] * Using the docker driver based on user configuration
	I1109 22:27:01.682266  851888 start.go:298] selected driver: docker
	I1109 22:27:01.682283  851888 start.go:902] validating driver "docker" against <nil>
	I1109 22:27:01.682381  851888 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 22:27:01.685673  851888 out.go:177] 
	W1109 22:27:01.688210  851888 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1109 22:27:01.690918  851888 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-228645 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-228645

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-228645

>>> host: /etc/nsswitch.conf:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /etc/hosts:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /etc/resolv.conf:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-228645

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-228645" does not exist

>>> k8s: netcat logs:
error: context "false-228645" does not exist

>>> k8s: describe coredns deployment:
error: context "false-228645" does not exist

>>> k8s: describe coredns pods:
error: context "false-228645" does not exist

>>> k8s: coredns logs:
error: context "false-228645" does not exist

>>> k8s: describe api server pod(s):
error: context "false-228645" does not exist

>>> k8s: api server logs:
error: context "false-228645" does not exist

>>> host: /etc/cni:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: ip a s:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: ip r s:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: iptables-save:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: iptables table nat:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> k8s: describe kube-proxy daemon set:
error: context "false-228645" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-228645" does not exist

>>> k8s: kube-proxy logs:
error: context "false-228645" does not exist

>>> host: kubelet daemon status:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: kubelet daemon config:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> k8s: kubelet logs:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
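
The kubectl config dump above is empty (clusters, contexts, and users are all null), which is why every kubectl command in this debug-log sweep fails with "context was not found": the "false-228645" profile was already deleted, taking its kubeconfig entries with it. A minimal sketch of how a harness could check for a context up front, assuming client-go's clientcmd package (the helper name is illustrative, not minikube's):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether the named context is present in the
// kubeconfig at path; a missing context is exactly the condition the
// "context was not found" errors above are reporting.
func contextExists(path, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := contextExists(clientcmd.RecommendedHomeFile, "false-228645")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("context present:", ok) // false once the profile is deleted
}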

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-228645

>>> host: docker daemon status:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: docker daemon config:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /etc/docker/daemon.json:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: docker system info:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: cri-docker daemon status:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: cri-docker daemon config:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: cri-dockerd version:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: containerd daemon status:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: containerd daemon config:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /etc/containerd/config.toml:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: containerd config dump:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: crio daemon status:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: crio daemon config:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: /etc/crio:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

>>> host: crio config:
* Profile "false-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-228645"

----------------------- debugLogs end: false-228645 [took: 5.365757628s] --------------------------------
helpers_test.go:175: Cleaning up "false-228645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-228645
--- PASS: TestNetworkPlugins/group/false (5.83s)
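
The profile cleanup above ("minikube delete -p false-228645") runs even though every debug-log command failed, because the harness registers deletion as deferred cleanup. A sketch of such a helper, assuming Go's testing package and os/exec (the function name is hypothetical; minikube's real helper lives in helpers_test.go):

package helpers

import (
	"os/exec"
	"testing"
)

// cleanupProfile registers a deferred "minikube delete -p <profile>" so
// the profile is removed even when the test fails partway through.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Cleanup(func() {
		t.Logf("Cleaning up %q profile ...", profile)
		out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %s: %v\n%s", profile, err, out)
		}
	})
}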

TestStartStop/group/old-k8s-version/serial/FirstStart (137.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-703474 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1109 22:29:50.727480  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-703474 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m17.895142171s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (137.90s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-703474 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7ddadeff-7f21-4006-8934-496bed8ec489] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7ddadeff-7f21-4006-8934-496bed8ec489] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.032352117s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-703474 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)
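
The DeployApp step applies testdata/busybox.yaml and then polls until a pod matching "integration-test=busybox" reports healthy. A compact sketch of that wait loop, assuming client-go (an illustration of the pattern, not minikube's actual helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls the default namespace until a pod matching the
// label selector is Running, or the context deadline expires.
func waitForRunning(ctx context.Context, cs *kubernetes.Clientset, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q not running: %w", selector, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 8*time.Minute)
	defer cancel()
	fmt.Println(waitForRunning(ctx, cs, "integration-test=busybox"))
}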

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-703474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-703474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.016718189s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-703474 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-703474 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-703474 --alsologtostderr -v=3: (12.143444265s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-703474 -n old-k8s-version-703474
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-703474 -n old-k8s-version-703474: exit status 7 (97.719095ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-703474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
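
"status error: exit status 7 (may be ok)" is deliberate: minikube's status command exits non-zero when the host is stopped, so the test records the exit code instead of failing on it. A sketch of that exit-code handling with os/exec (the function name is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus runs the status query and separates "command ran but the
// host is stopped" (non-zero exit, expected here) from real failures.
func hostStatus(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode(), nil // e.g. 7 for a stopped host
	}
	return string(out), 0, err
}

func main() {
	out, code, err := hostStatus("old-k8s-version-703474")
	fmt.Printf("status=%q exit=%d err=%v\n", out, code, err)
}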

TestStartStop/group/old-k8s-version/serial/SecondStart (438.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-703474 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1109 22:31:16.647444  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-703474 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m18.323765124s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-703474 -n old-k8s-version-703474
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (438.73s)
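
The "--format={{.Host}}" flag used throughout these checks is a Go text/template rendered over minikube's status struct. A stand-in sketch (the Status type here is simplified; minikube's real type has more fields):

package main

import (
	"os"
	"text/template"
)

// Status mimics the fields the report queries with --format
// ({{.Host}}, {{.Kubelet}}, {{.APIServer}}).
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints "Running"
		panic(err)
	}
}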

TestStartStop/group/no-preload/serial/FirstStart (70.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-679349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-679349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m10.288410738s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.29s)

TestStartStop/group/no-preload/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-679349 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [17845e1f-45d7-4383-a63f-8b30344c558b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [17845e1f-45d7-4383-a63f-8b30344c558b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.036186367s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-679349 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.48s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-679349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-679349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063357734s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-679349 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/no-preload/serial/Stop (12.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-679349 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-679349 --alsologtostderr -v=3: (12.052058504s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-679349 -n no-preload-679349
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-679349 -n no-preload-679349: exit status 7 (100.784514ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-679349 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (361.36s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-679349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1109 22:33:15.586540  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:34:50.727702  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 22:36:16.646910  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 22:36:18.634882  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:38:15.586383  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-679349 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (6m0.675312022s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-679349 -n no-preload-679349
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (361.36s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ss4th" [90cf62a3-a49f-4ebc-8734-fc03241030af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023022142s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-ss4th" [90cf62a3-a49f-4ebc-8734-fc03241030af] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009073793s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-703474 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-703474 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.50s)
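
VerifyKubernetesImages pulls "sudo crictl images -o json" from the node and flags tags outside the expected image set (here the kindnetd and busybox test images). A sketch of parsing that JSON, with the field names assumed from the CRI ListImages response and the prefix check standing in for the test's real expected-image list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList models the assumed shape of `crictl images -o json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// extraImages returns tags outside registry.k8s.io, a rough stand-in for
// the expected-image comparison the test performs.
func extraImages(raw []byte) ([]string, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	var extra []string
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				extra = append(extra, tag)
			}
		}
	}
	return extra, nil
}

func main() {
	raw, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	tags, err := extraImages(raw)
	fmt.Println(tags, err)
}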

TestStartStop/group/old-k8s-version/serial/Pause (4.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-703474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-703474 --alsologtostderr -v=1: (1.064929515s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-703474 -n old-k8s-version-703474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-703474 -n old-k8s-version-703474: exit status 2 (362.848509ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-703474 -n old-k8s-version-703474
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-703474 -n old-k8s-version-703474: exit status 2 (466.767794ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-703474 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-703474 --alsologtostderr -v=1: (1.136536082s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-703474 -n old-k8s-version-703474
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-703474 -n old-k8s-version-703474
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.46s)

TestStartStop/group/embed-certs/serial/FirstStart (85.07s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-166864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-166864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m25.069115263s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kz5np" [f50a48c6-9b68-404b-95ca-2c2d0653af11] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.036730887s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kz5np" [f50a48c6-9b68-404b-95ca-2c2d0653af11] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01963841s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-679349 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.5s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-679349 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.50s)

TestStartStop/group/no-preload/serial/Pause (4.36s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-679349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-679349 --alsologtostderr -v=1: (1.159979095s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-679349 -n no-preload-679349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-679349 -n no-preload-679349: exit status 2 (430.106616ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-679349 -n no-preload-679349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-679349 -n no-preload-679349: exit status 2 (436.386812ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-679349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-679349 -n no-preload-679349
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-679349 -n no-preload-679349
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.36s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-325381 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1109 22:39:50.727219  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-325381 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m20.662897645s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.67s)

TestStartStop/group/embed-certs/serial/DeployApp (9.63s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-166864 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [838ec438-bdf6-49b4-bb77-7ce55d74ba58] Pending
helpers_test.go:344: "busybox" [838ec438-bdf6-49b4-bb77-7ce55d74ba58] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [838ec438-bdf6-49b4-bb77-7ce55d74ba58] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.030032985s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-166864 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.63s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-166864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-166864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.118727831s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-166864 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (12.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-166864 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-166864 --alsologtostderr -v=3: (12.076442586s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.08s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-166864 -n embed-certs-166864
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-166864 -n embed-certs-166864: exit status 7 (108.327891ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-166864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (618.6s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-166864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1109 22:40:38.485216  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:38.490526  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:38.500781  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:38.521042  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:38.561275  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:38.641627  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:38.801987  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:39.122438  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:39.762620  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-166864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (10m17.864040507s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-166864 -n embed-certs-166864
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (618.60s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-325381 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cf66cf2e-ddd0-4d73-858a-627d119ec116] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1109 22:40:41.043524  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:40:43.605140  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
helpers_test.go:344: "busybox" [cf66cf2e-ddd0-4d73-858a-627d119ec116] Running
E1109 22:40:48.726165  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.039762106s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-325381 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.57s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-325381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-325381 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.551472403s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-325381 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.75s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-325381 --alsologtostderr -v=3
E1109 22:40:58.967233  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-325381 --alsologtostderr -v=3: (12.429093611s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381: exit status 7 (96.423742ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-325381 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-325381 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1109 22:41:16.647417  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 22:41:19.447498  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:42:00.408570  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:42:37.770240  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:37.775480  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:37.785728  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:37.806041  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:37.846294  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:37.927483  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:38.087829  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:38.408476  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:39.048969  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:40.329659  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:42.890237  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:48.011319  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:42:58.251525  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:43:15.585725  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
E1109 22:43:18.732212  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:43:22.328898  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:43:59.693301  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:44:33.772825  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 22:44:50.727399  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/functional-133528/client.crt: no such file or directory
E1109 22:45:21.613488  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:45:38.484725  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:46:06.169663  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:46:16.647212  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
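Note: the cert_rotation.go:168 lines interleaved above appear to come from the shared test binary's client-certificate reload watchers, which still reference profiles deleted earlier in the run (addons-386274, old-k8s-version-703474, no-preload-679349, and others); they read as background noise across tests rather than failures of this SecondStart.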
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-325381 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m46.112083132s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.72s)
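The start/verify pair this test drives is easy to reproduce by hand; a minimal sketch, with the hypothetical profile name demo standing in for the generated one:

  $ minikube start -p demo --memory=2200 --apiserver-port=8444 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.3
  $ minikube status --format='{{.Host}}' -p demo    # should print Running once the node is up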

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nb8pv" [929932a2-f6b0-4a0f-b4ae-a7dc6ccd1b55] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nb8pv" [929932a2-f6b0-4a0f-b4ae-a7dc6ccd1b55] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.032581594s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-nb8pv" [929932a2-f6b0-4a0f-b4ae-a7dc6ccd1b55] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015887591s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-325381 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-325381 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-325381 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381: exit status 2 (378.831992ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381: exit status 2 (369.764975ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-325381 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-325381 -n default-k8s-diff-port-325381
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.51s)
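The Pause sequence above is pause, confirm via two Go-template status queries, unpause, confirm again; exit status 2 on the paused-state queries is what the test logs as "may be ok". A hand-run equivalent with the same hypothetical demo profile:

  $ minikube pause -p demo
  $ minikube status --format='{{.APIServer}}' -p demo   # prints Paused, exits 2
  $ minikube status --format='{{.Kubelet}}' -p demo     # prints Stopped, exits 2
  $ minikube unpause -p demo
  $ minikube status --format='{{.APIServer}}' -p demo   # exits 0 again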

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (46.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-840777 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1109 22:47:37.770489  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:47:39.785163  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-840777 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (46.515083954s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.52s)
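Two flags do the interesting work here: --wait=apiserver,system_pods,default_sa narrows which components minikube blocks on before declaring the start finished, and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 forwards the pod CIDR to kubeadm. The "cni mode requires additional setup" warnings in the subtests that follow are the suite deliberately skipping pod-scheduling checks for this configuration.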

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-840777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-840777 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.174517111s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-840777 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-840777 --alsologtostderr -v=3: (1.327740892s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-840777 -n newest-cni-840777
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-840777 -n newest-cni-840777: exit status 7 (91.426197ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-840777 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
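Enabling an addon against a stopped profile succeeds because minikube records the change in the profile's config and applies it on the next start; the exit status 7 status probe beforehand is the test confirming the cluster really is down before it tries.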

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (30.30s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-840777 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1109 22:48:05.453910  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
E1109 22:48:15.586003  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-840777 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (29.875120649s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-840777 -n newest-cni-840777
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-840777 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-840777 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-840777 -n newest-cni-840777
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-840777 -n newest-cni-840777: exit status 2 (368.351369ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-840777 -n newest-cni-840777
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-840777 -n newest-cni-840777: exit status 2 (362.481229ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-840777 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-840777 -n newest-cni-840777
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-840777 -n newest-cni-840777
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (47.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (47.500380544s)
--- PASS: TestNetworkPlugins/group/auto/Start (47.50s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-228645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-228645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-brssw" [4c4549dc-4ab5-4149-8df9-345867ff47ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-brssw" [4c4549dc-4ab5-4149-8df9-345867ff47ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.011249995s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-228645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
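The Localhost and HairPin probes share one netcat invocation pattern: -z does a zero-I/O connect test, -w 5 caps the connect wait at five seconds, and -i 5 spaces out probes. HairPin differs from Localhost only in dialing the pod's own service name (netcat) rather than localhost, which exercises hairpin NAT back into the originating pod.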

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1109 22:50:38.484727  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/old-k8s-version-703474/client.crt: no such file or directory
E1109 22:50:40.632505  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:40.637772  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:40.648091  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:40.668366  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:40.708613  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:40.789090  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:40.949890  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:41.270176  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:41.911019  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
E1109 22:50:43.192126  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m20.332985594s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ns6hw" [731ba1d3-0cac-4765-8b12-023f3ddf9e1e] Running
E1109 22:50:45.753200  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.041919505s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ns6hw" [731ba1d3-0cac-4765-8b12-023f3ddf9e1e] Running
E1109 22:50:50.874419  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009933109s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-166864 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-166864 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-166864 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-166864 -n embed-certs-166864
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-166864 -n embed-certs-166864: exit status 2 (362.479267ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-166864 -n embed-certs-166864
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-166864 -n embed-certs-166864: exit status 2 (363.772033ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-166864 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-166864 -n embed-certs-166864
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-166864 -n embed-certs-166864
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.53s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (79.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1109 22:51:16.647308  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m19.165775157s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-d492k" [56ce27f0-b018-4bd6-8eb9-353702dd494c] Running
E1109 22:51:21.597938  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.047217541s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.05s)
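ControllerPod only waits for the CNI's own daemon pod to report Running; a manual spot-check using the same label the wait uses:

  $ kubectl --context kindnet-228645 -n kube-system get pods -l app=kindnet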

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-228645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-228645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fdnz8" [506877ee-8831-4fe3-b0eb-ff85b2495f9f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fdnz8" [506877ee-8831-4fe3-b0eb-ff85b2495f9f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.014111351s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-228645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (69.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m9.541142365s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.54s)
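Unlike the named presets (kindnet, calico, flannel, bridge), this variant hands --cni a manifest path, so minikube applies the supplied testdata/kube-flannel.yaml instead of one of its built-in plugins.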

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dx7f8" [180e4fd2-feed-40f6-b8af-2d4d676375e9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.045206105s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-228645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-228645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4j9rb" [07bd18b8-5896-4449-a3a5-e0baf3a3eb73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4j9rb" [07bd18b8-5896-4449-a3a5-e0baf3a3eb73] Running
E1109 22:52:37.770640  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/no-preload-679349/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.024562768s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.43s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-228645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m29.567121015s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.57s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-228645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-228645 replace --force -f testdata/netcat-deployment.yaml
E1109 22:53:15.586924  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/ingress-addon-legacy-861900/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2mgms" [1c355940-3375-430b-887a-c73999898c10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2mgms" [1c355940-3375-430b-887a-c73999898c10] Running
E1109 22:53:24.478914  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.009140505s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-228645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (68.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1109 22:54:26.122496  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:26.127769  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:26.138067  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:26.158377  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:26.198654  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:26.278913  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:26.439266  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:26.760143  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:27.400281  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:28.681002  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
E1109 22:54:31.241946  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m8.912033139s)
--- PASS: TestNetworkPlugins/group/flannel/Start (68.91s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-228645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-228645 replace --force -f testdata/netcat-deployment.yaml
E1109 22:54:36.362993  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jhs65" [7a55bd64-5d09-4e2a-8d57-fc8d6446c1d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jhs65" [7a55bd64-5d09-4e2a-8d57-fc8d6446c1d5] Running
E1109 22:54:46.604017  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.01078873s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-228645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kbvwp" [32a5e978-a6fb-41a6-a833-4910a4613cb4] Running
E1109 22:55:07.084645  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/auto-228645/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.039572403s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.06s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-228645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.50s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-228645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2r6qc" [a8d04972-e3c8-40f0-8b57-7be4db24c957] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2r6qc" [a8d04972-e3c8-40f0-8b57-7be4db24c957] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.016755001s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.50s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (46.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-228645 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (46.885508101s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.89s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-228645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-228645 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-228645 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c98dc" [c3f668e4-7774-4de8-9169-23e3531999e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c98dc" [c3f668e4-7774-4de8-9169-23e3531999e3] Running
E1109 22:56:08.319233  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/default-k8s-diff-port-325381/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.009230891s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (32.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-228645 exec deployment/netcat -- nslookup kubernetes.default
E1109 22:56:16.647021  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/addons-386274/client.crt: no such file or directory
E1109 22:56:19.281874  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:19.287151  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:19.297396  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:19.317702  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:19.357956  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:19.438224  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:19.598573  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:19.919496  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:20.560413  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:21.840563  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:24.402250  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-228645 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.221343683s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-228645 exec deployment/netcat -- nslookup kubernetes.default
E1109 22:56:29.522482  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
E1109 22:56:39.762697  713573 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17565-708188/.minikube/profiles/kindnet-228645/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-228645 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.208368578s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-228645 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (32.88s)
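Note the shape of this pass: the first two nslookup attempts timed out with "no servers could be reached", and only the third succeeded, which is why the test still passes at 32.88s. The retry-until-deadline behaviour can be approximated with a simple shell loop, assuming the same netcat deployment:

    # retry cluster-DNS resolution a few times before declaring failure
    for i in 1 2 3; do
      kubectl --context bridge-228645 exec deployment/netcat -- nslookup kubernetes.default && break
      sleep 5
    done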

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-228645 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (29/307)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0.64s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-254770 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-254770" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-254770
--- SKIP: TestDownloadOnlyKic (0.64s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-877691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-877691
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (5.11s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-228645 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-228645

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-228645

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /etc/hosts:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /etc/resolv.conf:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-228645

>>> host: crictl pods:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: crictl containers:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> k8s: describe netcat deployment:
error: context "kubenet-228645" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-228645" does not exist

>>> k8s: netcat logs:
error: context "kubenet-228645" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-228645" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-228645" does not exist

>>> k8s: coredns logs:
error: context "kubenet-228645" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-228645" does not exist

>>> k8s: api server logs:
error: context "kubenet-228645" does not exist

>>> host: /etc/cni:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: ip a s:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: ip r s:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: iptables-save:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: iptables table nat:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-228645" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-228645" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-228645" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: kubelet daemon config:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> k8s: kubelet logs:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-228645

>>> host: docker daemon status:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: docker daemon config:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: docker system info:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: cri-docker daemon status:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: cri-docker daemon config:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: cri-dockerd version:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: containerd daemon status:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: containerd daemon config:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: containerd config dump:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: crio daemon status:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: crio daemon config:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: /etc/crio:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

>>> host: crio config:
* Profile "kubenet-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-228645"

----------------------- debugLogs end: kubenet-228645 [took: 4.908554054s] --------------------------------
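Every probe above fails with "context was not found" or a "Profile ... not found" hint, which is expected: debugLogs runs for a profile that was skipped before any cluster was started, so neither a kubeconfig context nor a minikube profile ever existed. Two standard commands (not part of the test) confirm what does exist:

    kubectl config get-contexts
    out/minikube-linux-arm64 profile list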
helpers_test.go:175: Cleaning up "kubenet-228645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-228645
--- SKIP: TestNetworkPlugins/group/kubenet (5.11s)
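The skip itself is by design: kubenet is a legacy kubelet networking mode, and with the crio runtime minikube wires pod networking exclusively through CNI, so there is nothing for kubenet to exercise. To see which CNI configurations a running node actually carries, something like the following works (the profile name here is illustrative):

    out/minikube-linux-arm64 ssh -p bridge-228645 "ls /etc/cni/net.d"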

TestNetworkPlugins/group/cilium (5.18s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-228645 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-228645" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-228645

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: cri-dockerd version:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: containerd daemon status:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: containerd daemon config:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: containerd config dump:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: crio daemon status:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: crio daemon config:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: /etc/crio:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

>>> host: crio config:
* Profile "cilium-228645" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-228645"

----------------------- debugLogs end: cilium-228645 [took: 4.971106836s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-228645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-228645
--- SKIP: TestNetworkPlugins/group/cilium (5.18s)